00:00:00.001 Started by upstream project "autotest-per-patch" build number 132367
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.120 The recommended git tool is: git
00:00:00.120 using credential 00000000-0000-0000-0000-000000000002
00:00:00.122 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.174 Fetching changes from the remote Git repository
00:00:00.178 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.221 Using shallow fetch with depth 1
00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.221 > git --version # timeout=10
00:00:00.254 > git --version # 'git version 2.39.2'
00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.309 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.319 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.331 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.331 > git config core.sparsecheckout # timeout=10
00:00:04.342 > git read-tree -mu HEAD # timeout=10
00:00:04.359 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.381 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.381 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.466 [Pipeline] Start of Pipeline
00:00:04.478 [Pipeline] library
00:00:04.480 Loading library shm_lib@master
00:00:04.480 Library shm_lib@master is cached. Copying from home.
00:00:04.496 [Pipeline] node
00:00:04.513 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:04.515 [Pipeline] {
00:00:04.523 [Pipeline] catchError
00:00:04.524 [Pipeline] {
00:00:04.535 [Pipeline] wrap
00:00:04.543 [Pipeline] {
00:00:04.549 [Pipeline] stage
00:00:04.550 [Pipeline] { (Prologue)
00:00:04.563 [Pipeline] echo
00:00:04.564 Node: VM-host-SM38
00:00:04.568 [Pipeline] cleanWs
00:00:04.577 [WS-CLEANUP] Deleting project workspace...
00:00:04.577 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.584 [WS-CLEANUP] done
00:00:04.790 [Pipeline] setCustomBuildProperty
00:00:04.867 [Pipeline] httpRequest
00:00:05.634 [Pipeline] echo
00:00:05.636 Sorcerer 10.211.164.20 is alive
00:00:05.643 [Pipeline] retry
00:00:05.644 [Pipeline] {
00:00:05.653 [Pipeline] httpRequest
00:00:05.658 HttpMethod: GET
00:00:05.658 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.659 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:05.660 Response Code: HTTP/1.1 200 OK
00:00:05.661 Success: Status code 200 is in the accepted range: 200,404
00:00:05.662 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.443 [Pipeline] }
00:00:06.459 [Pipeline] // retry
00:00:06.465 [Pipeline] sh
00:00:06.746 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.763 [Pipeline] httpRequest
00:00:07.141 [Pipeline] echo
00:00:07.143 Sorcerer 10.211.164.20 is alive
00:00:07.154 [Pipeline] retry
00:00:07.156 [Pipeline] {
00:00:07.173 [Pipeline] httpRequest
00:00:07.178 HttpMethod: GET
00:00:07.178 URL: http://10.211.164.20/packages/spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz
00:00:07.179 Sending request to url: http://10.211.164.20/packages/spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz
00:00:07.192 Response Code: HTTP/1.1 200 OK
00:00:07.193 Success: Status code 200 is in the accepted range: 200,404
00:00:07.193 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz
00:01:09.879 [Pipeline] }
00:01:09.898 [Pipeline] // retry
00:01:09.906 [Pipeline] sh
00:01:10.192 + tar --no-same-owner -xf spdk_4f0cbdcd1df06f049393c89a62a8c0fac223818a.tar.gz
00:01:13.513 [Pipeline] sh
00:01:13.800 + git -C spdk log --oneline -n5
00:01:13.800 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites
00:01:13.800 097b7c969 test/nvmf: Drop $RDMA_IP_LIST
00:01:13.800 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP
00:01:13.800 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh
00:01:13.800 6fc96a60f test/nvmf: Prepare replacements for the network setup
00:01:13.822 [Pipeline] writeFile
00:01:13.839 [Pipeline] sh
00:01:14.127 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:14.141 [Pipeline] sh
00:01:14.441 + cat autorun-spdk.conf
00:01:14.441 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.441 SPDK_TEST_NVME=1
00:01:14.441 SPDK_TEST_FTL=1
00:01:14.441 SPDK_TEST_ISAL=1
00:01:14.441 SPDK_RUN_ASAN=1
00:01:14.441 SPDK_RUN_UBSAN=1
00:01:14.441 SPDK_TEST_XNVME=1
00:01:14.441 SPDK_TEST_NVME_FDP=1
00:01:14.441 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:14.450 RUN_NIGHTLY=0
00:01:14.452 [Pipeline] }
00:01:14.465 [Pipeline] // stage
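The autorun-spdk.conf dumped above is an ordinary KEY=value shell fragment; the test scripts consume it by sourcing it (visible in the prepare_nvme.sh trace that follows), so any flag left unset behaves as disabled. A minimal sketch of that consumption pattern, with the guard variable taken from the listing but the snippet itself illustrative, not SPDK code:

  #!/usr/bin/env bash
  # Source the job configuration; flags absent from the conf default to 0.
  source ./autorun-spdk.conf
  if [[ ${SPDK_TEST_NVME_FDP:-0} -eq 1 ]]; then
      echo "FDP-capable NVMe backing images will be provisioned"
  fi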
00:01:14.480 [Pipeline] stage
00:01:14.482 [Pipeline] { (Run VM)
00:01:14.494 [Pipeline] sh
00:01:14.774 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:14.774 + echo 'Start stage prepare_nvme.sh'
00:01:14.774 Start stage prepare_nvme.sh
00:01:14.774 + [[ -n 6 ]]
00:01:14.774 + disk_prefix=ex6
00:01:14.774 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:01:14.774 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:01:14.774 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:01:14.774 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:14.774 ++ SPDK_TEST_NVME=1
00:01:14.774 ++ SPDK_TEST_FTL=1
00:01:14.774 ++ SPDK_TEST_ISAL=1
00:01:14.774 ++ SPDK_RUN_ASAN=1
00:01:14.774 ++ SPDK_RUN_UBSAN=1
00:01:14.774 ++ SPDK_TEST_XNVME=1
00:01:14.774 ++ SPDK_TEST_NVME_FDP=1
00:01:14.774 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:14.774 ++ RUN_NIGHTLY=0
00:01:14.774 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:01:14.774 + nvme_files=()
00:01:14.774 + declare -A nvme_files
00:01:14.774 + backend_dir=/var/lib/libvirt/images/backends
00:01:14.774 + nvme_files['nvme.img']=5G
00:01:14.774 + nvme_files['nvme-cmb.img']=5G
00:01:14.774 + nvme_files['nvme-multi0.img']=4G
00:01:14.774 + nvme_files['nvme-multi1.img']=4G
00:01:14.774 + nvme_files['nvme-multi2.img']=4G
00:01:14.774 + nvme_files['nvme-openstack.img']=8G
00:01:14.774 + nvme_files['nvme-zns.img']=5G
00:01:14.774 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:14.774 + (( SPDK_TEST_FTL == 1 ))
00:01:14.774 + nvme_files["nvme-ftl.img"]=6G
00:01:14.774 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:14.774 + nvme_files["nvme-fdp.img"]=1G
00:01:14.774 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:14.774 + for nvme in "${!nvme_files[@]}"
00:01:14.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:14.774 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:14.774 + for nvme in "${!nvme_files[@]}"
00:01:14.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-ftl.img -s 6G
00:01:14.774 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:14.774 + for nvme in "${!nvme_files[@]}"
00:01:14.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:14.774 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:14.774 + for nvme in "${!nvme_files[@]}"
00:01:14.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:14.774 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:14.774 + for nvme in "${!nvme_files[@]}"
00:01:14.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:14.774 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:14.774 + for nvme in "${!nvme_files[@]}"
00:01:14.774 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:14.774 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.035 + for nvme in "${!nvme_files[@]}"
00:01:15.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:15.035 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:15.035 + for nvme in "${!nvme_files[@]}"
00:01:15.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-fdp.img -s 1G
00:01:15.035 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:15.035 + for nvme in "${!nvme_files[@]}"
00:01:15.035 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:15.035 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:15.035 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:15.035 + echo 'End stage prepare_nvme.sh'
00:01:15.035 End stage prepare_nvme.sh
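The "Formatting ..., fmt=raw ... preallocation=falloc" lines are characteristic of qemu-img create output; a rough standalone equivalent of one create_nvme_img.sh call, assuming the script wraps qemu-img (suggested by its output, but not shown in the log):

  # Hypothetical equivalent of: create_nvme_img.sh -n .../ex6-nvme.img -s 5G
  qemu-img create -f raw -o preallocation=falloc \
      /var/lib/libvirt/images/backends/ex6-nvme.img 5G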
00:01:15.049 [Pipeline] sh
00:01:15.337 + DISTRO=fedora39
00:01:15.337 + CPUS=10
00:01:15.337 + RAM=12288
00:01:15.337 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:15.337 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex6-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex6-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:15.337 
00:01:15.337 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:01:15.337 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:01:15.337 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:01:15.337 HELP=0
00:01:15.337 DRY_RUN=0
00:01:15.337 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,
00:01:15.337 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:15.337 NVME_AUTO_CREATE=0
00:01:15.337 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,,
00:01:15.337 NVME_CMB=,,,,
00:01:15.337 NVME_PMR=,,,,
00:01:15.337 NVME_ZNS=,,,,
00:01:15.337 NVME_MS=true,,,,
00:01:15.337 NVME_FDP=,,,on,
00:01:15.337 SPDK_VAGRANT_DISTRO=fedora39
00:01:15.337 SPDK_VAGRANT_VMCPU=10
00:01:15.337 SPDK_VAGRANT_VMRAM=12288
00:01:15.337 SPDK_VAGRANT_PROVIDER=libvirt
00:01:15.337 SPDK_VAGRANT_HTTP_PROXY=
00:01:15.337 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:15.337 SPDK_OPENSTACK_NETWORK=0
00:01:15.337 VAGRANT_PACKAGE_BOX=0
00:01:15.337 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:15.337 FORCE_DISTRO=true
00:01:15.337 VAGRANT_BOX_VERSION=
00:01:15.337 EXTRA_VAGRANTFILES=
00:01:15.337 NIC_MODEL=e1000
00:01:15.337 
00:01:15.337 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:01:15.337 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
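The comma-separated knobs above (NVME_DISKS_TYPE, NVME_CMB, NVME_MS, NVME_FDP, ...) are positional, one field per -b backing file, so NVME_FDP=,,,on, enables FDP only on the fourth disk. A small bash sketch of how such a list splits per drive (illustrative; not taken from vagrant_create_vm.sh itself):

  IFS=',' read -ra fdp <<< ",,,on,"
  for i in "${!fdp[@]}"; do
      # Empty fields fall back to "off"; index 3 prints "on".
      echo "disk $i: FDP=${fdp[$i]:-off}"
  done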
00:01:17.881 Bringing machine 'default' up with 'libvirt' provider...
00:01:18.141 ==> default: Creating image (snapshot of base box volume).
00:01:18.403 ==> default: Creating domain with the following settings...
00:01:18.403 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732092896_d7c23e1400b9c9d184cf
00:01:18.403 ==> default: -- Domain type: kvm
00:01:18.403 ==> default: -- Cpus: 10
00:01:18.403 ==> default: -- Feature: acpi
00:01:18.403 ==> default: -- Feature: apic
00:01:18.403 ==> default: -- Feature: pae
00:01:18.403 ==> default: -- Memory: 12288M
00:01:18.403 ==> default: -- Memory Backing: hugepages:
00:01:18.403 ==> default: -- Management MAC:
00:01:18.403 ==> default: -- Loader:
00:01:18.403 ==> default: -- Nvram:
00:01:18.403 ==> default: -- Base box: spdk/fedora39
00:01:18.403 ==> default: -- Storage pool: default
00:01:18.403 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732092896_d7c23e1400b9c9d184cf.img (20G)
00:01:18.403 ==> default: -- Volume Cache: default
00:01:18.403 ==> default: -- Kernel:
00:01:18.403 ==> default: -- Initrd:
00:01:18.403 ==> default: -- Graphics Type: vnc
00:01:18.403 ==> default: -- Graphics Port: -1
00:01:18.403 ==> default: -- Graphics IP: 127.0.0.1
00:01:18.403 ==> default: -- Graphics Password: Not defined
00:01:18.403 ==> default: -- Video Type: cirrus
00:01:18.403 ==> default: -- Video VRAM: 9216
00:01:18.403 ==> default: -- Sound Type:
00:01:18.403 ==> default: -- Keymap: en-us
00:01:18.403 ==> default: -- TPM Path:
00:01:18.403 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:18.403 ==> default: -- Command line args:
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:18.403 ==> default: -> value=-drive,
00:01:18.403 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:18.403 ==> default: -> value=-drive,
00:01:18.403 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-1-drive0,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:18.403 ==> default: -> value=-drive,
00:01:18.403 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:18.403 ==> default: -> value=-drive,
00:01:18.403 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:18.403 ==> default: -> value=-drive,
00:01:18.403 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:18.403 ==> default: -> value=-drive,
00:01:18.403 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:18.403 ==> default: -> value=-device,
00:01:18.403 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
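For reference, the FDP-related arguments above can be lifted out of the Vagrant/libvirt machinery; a minimal hand-run sketch of the same fourth-controller topology, with values copied from the args dump (assumes a QEMU new enough for NVMe FDP, per the vanilla-v8.0.0 emulator selected earlier; machine, memory, and PCI addr flags are omitted here):

  qemu-system-x86_64 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096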
00:01:18.664 ==> default: Creating shared folders metadata...
00:01:18.664 ==> default: Starting domain.
00:01:20.574 ==> default: Waiting for domain to get an IP address...
00:01:38.694 ==> default: Waiting for SSH to become available...
00:01:38.694 ==> default: Configuring and enabling network interfaces...
00:01:42.920 default: SSH address: 192.168.121.158:22
00:01:42.920 default: SSH username: vagrant
00:01:42.920 default: SSH auth method: private key
00:01:44.838 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:52.973 ==> default: Mounting SSHFS shared folder...
00:01:53.910 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:53.910 ==> default: Checking Mount..
00:01:55.286 ==> default: Folder Successfully Mounted!
00:01:55.286 
00:01:55.286 SUCCESS!
00:01:55.286 
00:01:55.286 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:55.286 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:55.287 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:55.287 
00:01:55.298 [Pipeline] }
00:01:55.314 [Pipeline] // stage
00:01:55.323 [Pipeline] dir
00:01:55.324 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:55.326 [Pipeline] {
00:01:55.340 [Pipeline] catchError
00:01:55.342 [Pipeline] {
00:01:55.354 [Pipeline] sh
00:01:55.631 + vagrant ssh-config --host vagrant
00:01:55.631 + sed -ne '/^Host/,$p'
00:01:55.631 + tee ssh_conf
00:01:58.174 Host vagrant
00:01:58.174   HostName 192.168.121.158
00:01:58.174   User vagrant
00:01:58.174   Port 22
00:01:58.174   UserKnownHostsFile /dev/null
00:01:58.174   StrictHostKeyChecking no
00:01:58.174   PasswordAuthentication no
00:01:58.174   IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:58.174   IdentitiesOnly yes
00:01:58.174   LogLevel FATAL
00:01:58.174   ForwardAgent yes
00:01:58.174   ForwardX11 yes
00:01:58.174 
00:01:58.189 [Pipeline] withEnv
00:01:58.192 [Pipeline] {
00:01:58.210 [Pipeline] sh
00:01:58.491 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:58.491 source /etc/os-release
00:01:58.491 [[ -e /image.version ]] && img=$(< /image.version)
00:01:58.491 # Minimal, systemd-like check.
00:01:58.491 if [[ -e /.dockerenv ]]; then
00:01:58.491 # Clear garbage from the node'\''s name:
00:01:58.491 # agt-er_autotest_547-896 -> autotest_547-896
00:01:58.491 # $HOSTNAME is the actual container id
00:01:58.491 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:58.491 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:58.491 # We can assume this is a mount from a host where container is running,
00:01:58.491 # so fetch its hostname to easily identify the target swarm worker.
00:01:58.491 container="$(< /etc/hostname) ($agent)"
00:01:58.491 else
00:01:58.491 # Fallback
00:01:58.491 container=$agent
00:01:58.491 fi
00:01:58.491 fi
00:01:58.491 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:58.491 '
00:01:58.502 [Pipeline] }
00:01:58.518 [Pipeline] // withEnv
00:01:58.526 [Pipeline] setCustomBuildProperty
00:01:58.543 [Pipeline] stage
00:01:58.545 [Pipeline] { (Tests)
00:01:58.565 [Pipeline] sh
00:01:58.852 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:58.874 [Pipeline] sh
00:01:59.157 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:59.435 [Pipeline] timeout
00:01:59.436 Timeout set to expire in 50 min
00:01:59.438 [Pipeline] {
00:01:59.454 [Pipeline] sh
00:01:59.739 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:00.309 HEAD is now at 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites
00:02:00.321 [Pipeline] sh
00:02:00.679 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:00.694 [Pipeline] sh
00:02:00.979 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:01.256 [Pipeline] sh
00:02:01.539 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:01.800 ++ readlink -f spdk_repo
00:02:01.800 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:01.800 + [[ -n /home/vagrant/spdk_repo ]]
00:02:01.800 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:01.800 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:01.800 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:01.800 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:01.800 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:01.800 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:01.800 + cd /home/vagrant/spdk_repo
00:02:01.800 + source /etc/os-release
00:02:01.800 ++ NAME='Fedora Linux'
00:02:01.800 ++ VERSION='39 (Cloud Edition)'
00:02:01.800 ++ ID=fedora
00:02:01.800 ++ VERSION_ID=39
00:02:01.800 ++ VERSION_CODENAME=
00:02:01.800 ++ PLATFORM_ID=platform:f39
00:02:01.800 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:01.800 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:01.800 ++ LOGO=fedora-logo-icon
00:02:01.800 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:01.800 ++ HOME_URL=https://fedoraproject.org/
00:02:01.800 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:01.800 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:01.800 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:01.800 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:01.800 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:01.800 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:01.800 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:01.800 ++ SUPPORT_END=2024-11-12
00:02:01.800 ++ VARIANT='Cloud Edition'
00:02:01.800 ++ VARIANT_ID=cloud
00:02:01.800 + uname -a
00:02:01.800 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:01.800 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:02.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:02.319 Hugepages
00:02:02.319 node hugesize free / total
00:02:02.319 node0 1048576kB 0 / 0
00:02:02.319 node0 2048kB 0 / 0
00:02:02.319 
00:02:02.319 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:02.319 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:02.319 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:02.319 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:02.319 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:02.319 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
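The Hugepages summary printed by setup.sh status can be reproduced straight from sysfs; a rough equivalent using standard kernel paths, independent of SPDK:

  # Per-NUMA-node hugepage counters, as reported above (free / total).
  for d in /sys/devices/system/node/node*/hugepages/hugepages-*; do
      echo "$d: $(cat "$d/free_hugepages") free / $(cat "$d/nr_hugepages") total"
  done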
00:02:02.319 + rm -f /tmp/spdk-ld-path
00:02:02.319 + source autorun-spdk.conf
00:02:02.319 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.319 ++ SPDK_TEST_NVME=1
00:02:02.319 ++ SPDK_TEST_FTL=1
00:02:02.319 ++ SPDK_TEST_ISAL=1
00:02:02.319 ++ SPDK_RUN_ASAN=1
00:02:02.319 ++ SPDK_RUN_UBSAN=1
00:02:02.319 ++ SPDK_TEST_XNVME=1
00:02:02.319 ++ SPDK_TEST_NVME_FDP=1
00:02:02.319 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.319 ++ RUN_NIGHTLY=0
00:02:02.319 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:02.319 + [[ -n '' ]]
00:02:02.319 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:02.319 + for M in /var/spdk/build-*-manifest.txt
00:02:02.319 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:02.319 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.319 + for M in /var/spdk/build-*-manifest.txt
00:02:02.319 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:02.319 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.319 + for M in /var/spdk/build-*-manifest.txt
00:02:02.319 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:02.319 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:02.580 ++ uname
00:02:02.580 + [[ Linux == \L\i\n\u\x ]]
00:02:02.580 + sudo dmesg -T
00:02:02.580 + sudo dmesg --clear
00:02:02.580 + dmesg_pid=5023
00:02:02.580 + [[ Fedora Linux == FreeBSD ]]
00:02:02.580 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.580 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:02.580 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:02.580 + [[ -x /usr/src/fio-static/fio ]]
00:02:02.580 + sudo dmesg -Tw
00:02:02.580 + export FIO_BIN=/usr/src/fio-static/fio
00:02:02.580 + FIO_BIN=/usr/src/fio-static/fio
00:02:02.580 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:02.580 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:02.580 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:02.580 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:02.580 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:02.580 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:02.580 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:02.580 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:02.580 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.580 08:55:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:02.580 08:55:41 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:02.580 08:55:41 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:02:02.580 08:55:41 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:02.580 08:55:41 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:02.580 08:55:41 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:02.580 08:55:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:02.580 08:55:41 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:02.580 08:55:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:02.580 08:55:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:02.580 08:55:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:02.580 08:55:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.580 08:55:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.580 08:55:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.580 08:55:41 -- paths/export.sh@5 -- $ export PATH
00:02:02.580 08:55:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:02.580 08:55:41 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:02.580 08:55:41 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:02.580 08:55:41 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732092941.XXXXXX
00:02:02.580 08:55:41 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732092941.8g2j5u
00:02:02.580 08:55:41 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:02.580 08:55:41 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:02.580 08:55:41 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:02.580 08:55:41 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:02.580 08:55:41 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:02.580 08:55:41 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:02.580 08:55:41 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:02.580 08:55:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.580 08:55:41 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:02.580 08:55:41 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:02.580 08:55:41 -- pm/common@17 -- $ local monitor
00:02:02.580 08:55:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:02.580 08:55:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:02.580 08:55:41 -- pm/common@25 -- $ sleep 1
00:02:02.580 08:55:41 -- pm/common@21 -- $ date +%s
00:02:02.580 08:55:41 -- pm/common@21 -- $ date +%s
00:02:02.580 08:55:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732092941
00:02:02.580 08:55:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732092941
00:02:02.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732092941_collect-cpu-load.pm.log
00:02:02.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732092941_collect-vmstat.pm.log
00:02:03.785 08:55:42 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:03.785 08:55:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:03.785 08:55:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:03.785 08:55:42 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:03.785 08:55:42 -- spdk/autobuild.sh@16 -- $ date -u
00:02:03.785 Wed Nov 20 08:55:42 AM UTC 2024
00:02:03.785 08:55:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:03.785 v25.01-pre-204-g4f0cbdcd1
00:02:03.785 08:55:42 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:03.785 08:55:42 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:03.785 08:55:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:03.785 08:55:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:03.785 08:55:42 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.785 ************************************
00:02:03.785 START TEST asan
00:02:03.785 ************************************
00:02:03.785 using asan
00:02:03.785 08:55:42 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:03.785 
00:02:03.785 real	0m0.000s
00:02:03.785 user	0m0.000s
00:02:03.785 sys	0m0.000s
00:02:03.785 08:55:42 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:03.785 08:55:42 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:03.785 ************************************
00:02:03.785 END TEST asan
00:02:03.785 ************************************
00:02:03.785 08:55:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:03.785 08:55:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:03.785 08:55:42 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:03.785 08:55:42 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:03.785 08:55:42 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.785 ************************************
00:02:03.785 START TEST ubsan
00:02:03.785 ************************************
00:02:03.785 using ubsan
00:02:03.785 08:55:42 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:03.785 
00:02:03.785 real	0m0.000s
00:02:03.785 user	0m0.000s
00:02:03.785 sys	0m0.000s
00:02:03.785 08:55:42 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:03.785 ************************************
00:02:03.785 END TEST ubsan
00:02:03.785 ************************************
00:02:03.785 08:55:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:03.785 08:55:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:03.785 08:55:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:03.785 08:55:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:03.785 08:55:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:03.785 08:55:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:03.785 08:55:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:03.785 08:55:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:03.785 08:55:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:03.785 08:55:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:04.047 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:04.047 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:04.308 Using 'verbs' RDMA provider
00:02:17.506 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:27.579 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:27.579 Creating mk/config.mk...done.
00:02:27.579 Creating mk/cc.flags.mk...done.
00:02:27.579 Type 'make' to build.
00:02:27.579 08:56:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:27.579 08:56:06 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:27.579 08:56:06 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:27.579 08:56:06 -- common/autotest_common.sh@10 -- $ set +x
00:02:27.579 ************************************
00:02:27.579 START TEST make
00:02:27.579 ************************************
00:02:27.579 08:56:06 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:27.579 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:27.579 	export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:27.579 	meson setup builddir \
00:02:27.579 	-Dwith-libaio=enabled \
00:02:27.579 	-Dwith-liburing=enabled \
00:02:27.579 	-Dwith-libvfn=disabled \
00:02:27.579 	-Dwith-spdk=disabled \
00:02:27.579 	-Dexamples=false \
00:02:27.579 	-Dtests=false \
00:02:27.579 	-Dtools=false && \
00:02:27.579 	meson compile -C builddir && \
00:02:27.579 	cd -)
00:02:27.579 make[1]: Nothing to be done for 'all'.
00:02:30.129 The Meson build system
00:02:30.129 Version: 1.5.0
00:02:30.129 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:30.129 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:30.129 Build type: native build
00:02:30.129 Project name: xnvme
00:02:30.129 Project version: 0.7.5
00:02:30.129 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:30.129 C linker for the host machine: cc ld.bfd 2.40-14
00:02:30.129 Host machine cpu family: x86_64
00:02:30.129 Host machine cpu: x86_64
00:02:30.129 Message: host_machine.system: linux
00:02:30.129 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:30.129 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:30.129 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:30.129 Run-time dependency threads found: YES
00:02:30.129 Has header "setupapi.h" : NO
00:02:30.129 Has header "linux/blkzoned.h" : YES
00:02:30.129 Has header "linux/blkzoned.h" : YES (cached)
00:02:30.129 Has header "libaio.h" : YES
00:02:30.129 Library aio found: YES
00:02:30.129 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:30.129 Run-time dependency liburing found: YES 2.2
00:02:30.129 Dependency libvfn skipped: feature with-libvfn disabled
00:02:30.129 Found CMake: /usr/bin/cmake (3.27.7)
00:02:30.129 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:30.129 Subproject spdk : skipped: feature with-spdk disabled
00:02:30.129 Run-time dependency appleframeworks found: NO (tried framework)
00:02:30.129 Run-time dependency appleframeworks found: NO (tried framework)
00:02:30.129 Library rt found: YES
00:02:30.129 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:30.129 Configuring xnvme_config.h using configuration
00:02:30.129 Configuring xnvme.spec using configuration
00:02:30.129 Run-time dependency bash-completion found: YES 2.11
00:02:30.129 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:30.129 Program cp found: YES (/usr/bin/cp)
00:02:30.129 Build targets in project: 3
00:02:30.129 
00:02:30.129 xnvme 0.7.5
00:02:30.129 
00:02:30.129 Subprojects
00:02:30.129   spdk         : NO Feature 'with-spdk' disabled
00:02:30.129 
00:02:30.129 User defined options
00:02:30.129   examples     : false
00:02:30.129   tests        : false
00:02:30.129   tools        : false
00:02:30.129   with-libaio  : enabled
00:02:30.129   with-liburing: enabled
00:02:30.129   with-libvfn  : disabled
00:02:30.129   with-spdk    : disabled
00:02:30.129 
00:02:30.129 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:30.390 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:30.390 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:30.390 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:30.390 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:30.390 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:30.390 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:30.390 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:30.650 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:30.650 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:30.650 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:30.650 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:30.650 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:30.650 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:30.650 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:30.650 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:30.650 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:30.650 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:30.650 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:30.650 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:30.650 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:30.650 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:30.650 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:30.650 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:30.650 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:30.650 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:30.650 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:30.650 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:30.650 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:30.911 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:30.911 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:30.911 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:30.911 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:30.911 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:30.911 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:30.911 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:30.911 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:30.911 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:30.911 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:30.911 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:30.911 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:30.911 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:30.911 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:30.911 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:30.911 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:30.911 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:30.911 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:30.911 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:30.911 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:30.911 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:30.911 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:30.911 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:30.911 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:30.911 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:30.911 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:30.911 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:30.911 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:30.911 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:30.911 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:31.173 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:31.173 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:31.173 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:31.173 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:31.173 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:31.173 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:31.173 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:31.173 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:31.173 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:31.173 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:31.173 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:31.173 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:31.173 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:31.173 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:31.173 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:31.434 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:31.694 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:31.694 [75/76] Linking static target lib/libxnvme.a
00:02:31.694 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:31.694 INFO: autodetecting backend as ninja
00:02:31.694 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:31.694 /home/vagrant/spdk_repo/spdk/xnvmebuild
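At this point the bundled xnvme has been built with the feature set traced earlier. For reference, the same build can be reproduced by hand with the exact meson invocation from the make trace above (paths assumed from this job's layout):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  meson setup builddir \
      -Dwith-libaio=enabled -Dwith-liburing=enabled \
      -Dwith-libvfn=disabled -Dwith-spdk=disabled \
      -Dexamples=false -Dtests=false -Dtools=false
  meson compile -C builddir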
00:02:38.257 The Meson build system
00:02:38.257 Version: 1.5.0
00:02:38.257 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:38.257 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:38.257 Build type: native build
00:02:38.257 Program cat found: YES (/usr/bin/cat)
00:02:38.258 Project name: DPDK
00:02:38.258 Project version: 24.03.0
00:02:38.258 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:38.258 C linker for the host machine: cc ld.bfd 2.40-14
00:02:38.258 Host machine cpu family: x86_64
00:02:38.258 Host machine cpu: x86_64
00:02:38.258 Message: ## Building in Developer Mode ##
00:02:38.258 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:38.258 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:38.258 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:38.258 Program python3 found: YES (/usr/bin/python3)
00:02:38.258 Program cat found: YES (/usr/bin/cat)
00:02:38.258 Compiler for C supports arguments -march=native: YES
00:02:38.258 Checking for size of "void *" : 8
00:02:38.258 Checking for size of "void *" : 8 (cached)
00:02:38.258 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:38.258 Library m found: YES
00:02:38.258 Library numa found: YES
00:02:38.258 Has header "numaif.h" : YES
00:02:38.258 Library fdt found: NO
00:02:38.258 Library execinfo found: NO
00:02:38.258 Has header "execinfo.h" : YES
00:02:38.258 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:38.258 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:38.258 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:38.258 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:38.258 Run-time dependency openssl found: YES 3.1.1
00:02:38.258 Run-time dependency libpcap found: YES 1.10.4
00:02:38.258 Has header "pcap.h" with dependency libpcap: YES
00:02:38.258 Compiler for C supports arguments -Wcast-qual: YES
00:02:38.258 Compiler for C supports arguments -Wdeprecated: YES
00:02:38.258 Compiler for C supports arguments -Wformat: YES
00:02:38.258 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:38.258 Compiler for C supports arguments -Wformat-security: NO
00:02:38.258 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:38.258 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:38.258 Compiler for C supports arguments -Wnested-externs: YES
00:02:38.258 Compiler for C supports arguments -Wold-style-definition: YES
00:02:38.258 Compiler for C supports arguments -Wpointer-arith: YES
00:02:38.258 Compiler for C supports arguments -Wsign-compare: YES
00:02:38.258 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:38.258 Compiler for C supports arguments -Wundef: YES
00:02:38.258 Compiler for C supports arguments -Wwrite-strings: YES
00:02:38.258 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:38.258 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:38.258 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:38.258 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:38.258 Program objdump found: YES (/usr/bin/objdump)
00:02:38.258 Compiler for C supports arguments -mavx512f: YES
00:02:38.258 Checking if "AVX512 checking" compiles: YES
00:02:38.258 Fetching value of define "__SSE4_2__" : 1
00:02:38.258 Fetching value of define "__AES__" : 1
00:02:38.258 Fetching value of define "__AVX__" : 1
00:02:38.258 Fetching value of define "__AVX2__" : 1
00:02:38.258 Fetching value of define "__AVX512BW__" : 1
00:02:38.258 Fetching value of define "__AVX512CD__" : 1
00:02:38.258 Fetching value of define "__AVX512DQ__" : 1
00:02:38.258 Fetching value of define "__AVX512F__" : 1
00:02:38.258 Fetching value of define "__AVX512VL__" : 1
00:02:38.258 Fetching value of define "__PCLMUL__" : 1
00:02:38.258 Fetching value of define "__RDRND__" : 1
00:02:38.258 Fetching value of define "__RDSEED__" : 1
00:02:38.258 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:38.258 Fetching value of define "__znver1__" : (undefined)
00:02:38.258 Fetching value of define "__znver2__" : (undefined)
00:02:38.258 Fetching value of define "__znver3__" : (undefined)
00:02:38.258 Fetching value of define "__znver4__" : (undefined)
00:02:38.258 Library asan found: YES
00:02:38.258 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:38.258 Message: lib/log: Defining dependency "log"
00:02:38.258 Message: lib/kvargs: Defining dependency "kvargs"
00:02:38.258 Message: lib/telemetry: Defining dependency "telemetry"
00:02:38.258 Library rt found: YES
00:02:38.258 Checking for function "getentropy" : NO
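The "Run-time dependency ... found" probes above come from Meson dependency lookups, which try pkg-config (and, where configured, CMake) under the hood; the same check can be made manually, e.g. for libpcap (assumes pkg-config and the libpcap development files are installed):

  pkg-config --exists libpcap && pkg-config --modversion libpcap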
00:02:38.258 Message: lib/eal: Defining dependency "eal"
00:02:38.258 Message: lib/ring: Defining dependency "ring"
00:02:38.258 Message: lib/rcu: Defining dependency "rcu"
00:02:38.258 Message: lib/mempool: Defining dependency "mempool"
00:02:38.258 Message: lib/mbuf: Defining dependency "mbuf"
00:02:38.258 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:38.258 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:38.258 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:38.258 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:38.258 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:38.258 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:38.258 Compiler for C supports arguments -mpclmul: YES
00:02:38.258 Compiler for C supports arguments -maes: YES
00:02:38.258 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:38.258 Compiler for C supports arguments -mavx512bw: YES
00:02:38.258 Compiler for C supports arguments -mavx512dq: YES
00:02:38.258 Compiler for C supports arguments -mavx512vl: YES
00:02:38.258 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:38.258 Compiler for C supports arguments -mavx2: YES
00:02:38.258 Compiler for C supports arguments -mavx: YES
00:02:38.258 Message: lib/net: Defining dependency "net"
00:02:38.258 Message: lib/meter: Defining dependency "meter"
00:02:38.258 Message: lib/ethdev: Defining dependency "ethdev"
00:02:38.258 Message: lib/pci: Defining dependency "pci"
00:02:38.258 Message: lib/cmdline: Defining dependency "cmdline"
00:02:38.258 Message: lib/hash: Defining dependency "hash"
00:02:38.258 Message: lib/timer: Defining dependency "timer"
00:02:38.258 Message: lib/compressdev: Defining dependency "compressdev"
00:02:38.258 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:38.258 Message: lib/dmadev: Defining dependency "dmadev"
00:02:38.258 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:38.258 Message: lib/power: Defining dependency "power"
00:02:38.258 Message: lib/reorder: Defining dependency "reorder"
00:02:38.258 Message: lib/security: Defining dependency "security"
00:02:38.258 Has header "linux/userfaultfd.h" : YES
00:02:38.258 Has header "linux/vduse.h" : YES
00:02:38.258 Message: lib/vhost: Defining dependency "vhost"
00:02:38.258 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:38.258 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:38.258 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:38.258 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:38.258 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:38.258 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:38.258 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:38.258 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:38.258 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:38.258 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:38.258 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:38.258 Configuring doxy-api-html.conf using configuration
00:02:38.258 Configuring doxy-api-man.conf using configuration
00:02:38.258 Program mandb found: YES (/usr/bin/mandb)
00:02:38.258 Program sphinx-build found: NO
00:02:38.258 Configuring rte_build_config.h using configuration
00:02:38.258 Message: 
00:02:38.258 =================
00:02:38.258 Applications Enabled
00:02:38.258 =================
00:02:38.258 
00:02:38.258 apps:
00:02:38.258 
00:02:38.258 
00:02:38.258 Message: 
00:02:38.258 =================
00:02:38.258 Libraries Enabled
00:02:38.258 =================
00:02:38.258 
00:02:38.258 libs:
00:02:38.258 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:38.258 	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:38.258 	cryptodev, dmadev, power, reorder, security, vhost, 
00:02:38.258 
00:02:38.258 Message: 
00:02:38.258 ===============
00:02:38.258 Drivers Enabled
00:02:38.258 ===============
00:02:38.258 
00:02:38.258 common:
00:02:38.258 
00:02:38.258 bus:
00:02:38.258 	pci, vdev, 
00:02:38.258 mempool:
00:02:38.258 	ring, 
00:02:38.258 dma:
00:02:38.258 
00:02:38.258 net:
00:02:38.258 
00:02:38.258 crypto:
00:02:38.258 
00:02:38.258 compress:
00:02:38.258 
00:02:38.258 vdpa:
00:02:38.258 
00:02:38.258 
00:02:38.258 Message: 
00:02:38.258 =================
00:02:38.258 Content Skipped
00:02:38.258 =================
00:02:38.258 
00:02:38.258 apps:
00:02:38.258 	dumpcap:	explicitly disabled via build config
00:02:38.258 	graph:	explicitly disabled via build config
00:02:38.258 	pdump:	explicitly disabled via build config
00:02:38.258 	proc-info:	explicitly disabled via build config
00:02:38.258 	test-acl:	explicitly disabled via build config
00:02:38.258 	test-bbdev:	explicitly disabled via build config
00:02:38.258 	test-cmdline:	explicitly disabled via build config
00:02:38.258 	test-compress-perf:	explicitly disabled via build config
00:02:38.258 	test-crypto-perf:	explicitly disabled via build config
00:02:38.258 	test-dma-perf:	explicitly disabled via build config
00:02:38.258 	test-eventdev:	explicitly disabled via build config
00:02:38.259 	test-fib:	explicitly disabled via build config
00:02:38.259 	test-flow-perf:	explicitly disabled via build config
00:02:38.259 	test-gpudev:	explicitly disabled via build config
00:02:38.259 	test-mldev:	explicitly disabled via build config
00:02:38.259 	test-pipeline:	explicitly disabled via build config
00:02:38.259 	test-pmd:	explicitly disabled via build config
00:02:38.259 	test-regex:	explicitly disabled via build config
00:02:38.259 	test-sad:	explicitly disabled via build config
00:02:38.259 	test-security-perf:	explicitly disabled via build config
00:02:38.259 
00:02:38.259 libs:
00:02:38.259 	argparse:	explicitly disabled via build config
00:02:38.259 	metrics:	explicitly disabled via build config
00:02:38.259 	acl:	explicitly disabled via build config
00:02:38.259 	bbdev:	explicitly disabled via build config
00:02:38.259 	bitratestats:	explicitly disabled via build config
00:02:38.259 	bpf:	explicitly disabled via build config
00:02:38.259 	cfgfile:	explicitly disabled via build config
00:02:38.259 	distributor:	explicitly disabled via build config
00:02:38.259 	efd:	explicitly disabled via build config
00:02:38.259 	eventdev:	explicitly disabled via build config
00:02:38.259 	dispatcher:	explicitly disabled via build config
00:02:38.259 	gpudev:	explicitly disabled via build config
00:02:38.259 	gro:	explicitly disabled via build config
00:02:38.259 	gso:	explicitly disabled via build config
00:02:38.259 	ip_frag:	explicitly disabled via build config
00:02:38.259 	jobstats:	explicitly disabled via build config
00:02:38.259 	latencystats:	explicitly disabled via build config
00:02:38.259 	lpm:	explicitly disabled via build config
00:02:38.259 	member:	explicitly disabled via build config
00:02:38.259 	pcapng:	explicitly disabled via build config
00:02:38.259 	rawdev:	explicitly disabled via build config
00:02:38.259 	regexdev:	explicitly disabled via build config
disabled via build config 00:02:38.259 mldev: explicitly disabled via build config 00:02:38.259 rib: explicitly disabled via build config 00:02:38.259 sched: explicitly disabled via build config 00:02:38.259 stack: explicitly disabled via build config 00:02:38.259 ipsec: explicitly disabled via build config 00:02:38.259 pdcp: explicitly disabled via build config 00:02:38.259 fib: explicitly disabled via build config 00:02:38.259 port: explicitly disabled via build config 00:02:38.259 pdump: explicitly disabled via build config 00:02:38.259 table: explicitly disabled via build config 00:02:38.259 pipeline: explicitly disabled via build config 00:02:38.259 graph: explicitly disabled via build config 00:02:38.259 node: explicitly disabled via build config 00:02:38.259 00:02:38.259 drivers: 00:02:38.259 common/cpt: not in enabled drivers build config 00:02:38.259 common/dpaax: not in enabled drivers build config 00:02:38.259 common/iavf: not in enabled drivers build config 00:02:38.259 common/idpf: not in enabled drivers build config 00:02:38.259 common/ionic: not in enabled drivers build config 00:02:38.259 common/mvep: not in enabled drivers build config 00:02:38.259 common/octeontx: not in enabled drivers build config 00:02:38.259 bus/auxiliary: not in enabled drivers build config 00:02:38.259 bus/cdx: not in enabled drivers build config 00:02:38.259 bus/dpaa: not in enabled drivers build config 00:02:38.259 bus/fslmc: not in enabled drivers build config 00:02:38.259 bus/ifpga: not in enabled drivers build config 00:02:38.259 bus/platform: not in enabled drivers build config 00:02:38.259 bus/uacce: not in enabled drivers build config 00:02:38.259 bus/vmbus: not in enabled drivers build config 00:02:38.259 common/cnxk: not in enabled drivers build config 00:02:38.259 common/mlx5: not in enabled drivers build config 00:02:38.259 common/nfp: not in enabled drivers build config 00:02:38.259 common/nitrox: not in enabled drivers build config 00:02:38.259 common/qat: not in enabled drivers build config 00:02:38.259 common/sfc_efx: not in enabled drivers build config 00:02:38.259 mempool/bucket: not in enabled drivers build config 00:02:38.259 mempool/cnxk: not in enabled drivers build config 00:02:38.259 mempool/dpaa: not in enabled drivers build config 00:02:38.259 mempool/dpaa2: not in enabled drivers build config 00:02:38.259 mempool/octeontx: not in enabled drivers build config 00:02:38.259 mempool/stack: not in enabled drivers build config 00:02:38.259 dma/cnxk: not in enabled drivers build config 00:02:38.259 dma/dpaa: not in enabled drivers build config 00:02:38.259 dma/dpaa2: not in enabled drivers build config 00:02:38.259 dma/hisilicon: not in enabled drivers build config 00:02:38.259 dma/idxd: not in enabled drivers build config 00:02:38.259 dma/ioat: not in enabled drivers build config 00:02:38.259 dma/skeleton: not in enabled drivers build config 00:02:38.259 net/af_packet: not in enabled drivers build config 00:02:38.259 net/af_xdp: not in enabled drivers build config 00:02:38.259 net/ark: not in enabled drivers build config 00:02:38.259 net/atlantic: not in enabled drivers build config 00:02:38.259 net/avp: not in enabled drivers build config 00:02:38.259 net/axgbe: not in enabled drivers build config 00:02:38.259 net/bnx2x: not in enabled drivers build config 00:02:38.259 net/bnxt: not in enabled drivers build config 00:02:38.259 net/bonding: not in enabled drivers build config 00:02:38.259 net/cnxk: not in enabled drivers build config 00:02:38.259 net/cpfl: not in enabled drivers 
build config 00:02:38.259 net/cxgbe: not in enabled drivers build config 00:02:38.259 net/dpaa: not in enabled drivers build config 00:02:38.259 net/dpaa2: not in enabled drivers build config 00:02:38.259 net/e1000: not in enabled drivers build config 00:02:38.259 net/ena: not in enabled drivers build config 00:02:38.259 net/enetc: not in enabled drivers build config 00:02:38.259 net/enetfec: not in enabled drivers build config 00:02:38.259 net/enic: not in enabled drivers build config 00:02:38.259 net/failsafe: not in enabled drivers build config 00:02:38.259 net/fm10k: not in enabled drivers build config 00:02:38.259 net/gve: not in enabled drivers build config 00:02:38.259 net/hinic: not in enabled drivers build config 00:02:38.259 net/hns3: not in enabled drivers build config 00:02:38.259 net/i40e: not in enabled drivers build config 00:02:38.259 net/iavf: not in enabled drivers build config 00:02:38.259 net/ice: not in enabled drivers build config 00:02:38.259 net/idpf: not in enabled drivers build config 00:02:38.259 net/igc: not in enabled drivers build config 00:02:38.259 net/ionic: not in enabled drivers build config 00:02:38.259 net/ipn3ke: not in enabled drivers build config 00:02:38.259 net/ixgbe: not in enabled drivers build config 00:02:38.259 net/mana: not in enabled drivers build config 00:02:38.259 net/memif: not in enabled drivers build config 00:02:38.259 net/mlx4: not in enabled drivers build config 00:02:38.259 net/mlx5: not in enabled drivers build config 00:02:38.259 net/mvneta: not in enabled drivers build config 00:02:38.259 net/mvpp2: not in enabled drivers build config 00:02:38.259 net/netvsc: not in enabled drivers build config 00:02:38.259 net/nfb: not in enabled drivers build config 00:02:38.259 net/nfp: not in enabled drivers build config 00:02:38.259 net/ngbe: not in enabled drivers build config 00:02:38.259 net/null: not in enabled drivers build config 00:02:38.259 net/octeontx: not in enabled drivers build config 00:02:38.259 net/octeon_ep: not in enabled drivers build config 00:02:38.259 net/pcap: not in enabled drivers build config 00:02:38.259 net/pfe: not in enabled drivers build config 00:02:38.259 net/qede: not in enabled drivers build config 00:02:38.259 net/ring: not in enabled drivers build config 00:02:38.259 net/sfc: not in enabled drivers build config 00:02:38.259 net/softnic: not in enabled drivers build config 00:02:38.259 net/tap: not in enabled drivers build config 00:02:38.259 net/thunderx: not in enabled drivers build config 00:02:38.259 net/txgbe: not in enabled drivers build config 00:02:38.259 net/vdev_netvsc: not in enabled drivers build config 00:02:38.259 net/vhost: not in enabled drivers build config 00:02:38.259 net/virtio: not in enabled drivers build config 00:02:38.259 net/vmxnet3: not in enabled drivers build config 00:02:38.259 raw/*: missing internal dependency, "rawdev" 00:02:38.259 crypto/armv8: not in enabled drivers build config 00:02:38.259 crypto/bcmfs: not in enabled drivers build config 00:02:38.259 crypto/caam_jr: not in enabled drivers build config 00:02:38.259 crypto/ccp: not in enabled drivers build config 00:02:38.259 crypto/cnxk: not in enabled drivers build config 00:02:38.259 crypto/dpaa_sec: not in enabled drivers build config 00:02:38.259 crypto/dpaa2_sec: not in enabled drivers build config 00:02:38.259 crypto/ipsec_mb: not in enabled drivers build config 00:02:38.259 crypto/mlx5: not in enabled drivers build config 00:02:38.259 crypto/mvsam: not in enabled drivers build config 00:02:38.259 crypto/nitrox: 
not in enabled drivers build config 00:02:38.259 crypto/null: not in enabled drivers build config 00:02:38.259 crypto/octeontx: not in enabled drivers build config 00:02:38.259 crypto/openssl: not in enabled drivers build config 00:02:38.259 crypto/scheduler: not in enabled drivers build config 00:02:38.259 crypto/uadk: not in enabled drivers build config 00:02:38.259 crypto/virtio: not in enabled drivers build config 00:02:38.259 compress/isal: not in enabled drivers build config 00:02:38.259 compress/mlx5: not in enabled drivers build config 00:02:38.259 compress/nitrox: not in enabled drivers build config 00:02:38.259 compress/octeontx: not in enabled drivers build config 00:02:38.259 compress/zlib: not in enabled drivers build config 00:02:38.259 regex/*: missing internal dependency, "regexdev" 00:02:38.259 ml/*: missing internal dependency, "mldev" 00:02:38.259 vdpa/ifc: not in enabled drivers build config 00:02:38.259 vdpa/mlx5: not in enabled drivers build config 00:02:38.259 vdpa/nfp: not in enabled drivers build config 00:02:38.259 vdpa/sfc: not in enabled drivers build config 00:02:38.259 event/*: missing internal dependency, "eventdev" 00:02:38.259 baseband/*: missing internal dependency, "bbdev" 00:02:38.259 gpu/*: missing internal dependency, "gpudev" 00:02:38.259 00:02:38.259 00:02:38.259 Build targets in project: 84 00:02:38.259 00:02:38.259 DPDK 24.03.0 00:02:38.259 00:02:38.259 User defined options 00:02:38.259 buildtype : debug 00:02:38.259 default_library : shared 00:02:38.259 libdir : lib 00:02:38.259 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:38.259 b_sanitize : address 00:02:38.259 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:38.259 c_link_args : 00:02:38.259 cpu_instruction_set: native 00:02:38.260 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:38.260 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:38.260 enable_docs : false 00:02:38.260 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:38.260 enable_kmods : false 00:02:38.260 max_lcores : 128 00:02:38.260 tests : false 00:02:38.260 00:02:38.260 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:38.260 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:38.260 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:38.260 [2/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:38.260 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:38.260 [4/267] Linking static target lib/librte_log.a 00:02:38.260 [5/267] Linking static target lib/librte_kvargs.a 00:02:38.260 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:38.518 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:38.518 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:38.518 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:38.518 [10/267] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:38.518 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:38.518 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:38.518 [13/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.518 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:38.776 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:38.776 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:38.776 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:38.776 [18/267] Linking static target lib/librte_telemetry.a 00:02:39.034 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.034 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.034 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.034 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.034 [23/267] Linking target lib/librte_log.so.24.1 00:02:39.034 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.034 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.034 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:39.034 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.292 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.292 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.292 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:39.292 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.292 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:39.292 [33/267] Linking target lib/librte_telemetry.so.24.1 00:02:39.292 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.556 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.556 [36/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:39.556 [37/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:39.556 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.556 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.556 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.556 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.556 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.556 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.556 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.556 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:39.556 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.823 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:39.823 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
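Note on the DPDK configuration summarized above: the "User defined options" block that meson printed corresponds roughly to a setup invocation of the shape sketched below. This is a reconstruction from the printed summary only; the harness reaches meson through SPDK's own build wrappers rather than a hand-typed command, so the literal command line is an assumption, though every option value shown here is taken verbatim from the summary:

  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --buildtype=debug \
    -Ddefault_library=shared \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dmax_lcores=128 \
    -Dtests=false

The backend command reported further down in the log (/usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10) then builds the 267 targets enumerated by these [n/267] lines.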
00:02:40.081 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.081 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.081 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.081 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.081 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.081 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.081 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.081 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.081 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.339 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.339 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.339 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.339 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.339 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.339 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.339 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.597 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.597 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.597 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.597 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.597 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.597 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.855 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.855 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.855 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.855 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.855 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.855 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.855 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.856 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:41.114 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:41.114 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.114 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:41.114 [82/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:41.114 [83/267] Linking static target lib/librte_ring.a 00:02:41.373 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:41.373 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.373 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.373 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.373 [88/267] Linking static target lib/librte_eal.a 00:02:41.373 [89/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.373 [90/267] Linking static target 
lib/librte_rcu.a 00:02:41.631 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.631 [92/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.632 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.632 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.632 [95/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.632 [96/267] Linking static target lib/librte_mempool.a 00:02:41.890 [97/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.890 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.890 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.890 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.890 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:42.148 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.148 [103/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.148 [104/267] Linking static target lib/librte_meter.a 00:02:42.148 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.406 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.406 [107/267] Linking static target lib/librte_mbuf.a 00:02:42.406 [108/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.406 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:42.406 [110/267] Linking static target lib/librte_net.a 00:02:42.406 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.406 [112/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.664 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.664 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.664 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.664 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.664 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.923 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.923 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:42.923 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:43.182 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.182 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.182 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.182 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.182 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.182 [126/267] Linking static target lib/librte_pci.a 00:02:43.182 [127/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.441 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.441 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.441 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.441 [131/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.441 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.441 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.441 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.441 [135/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.441 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.441 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:43.441 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.441 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.700 [140/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:43.700 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.700 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.700 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.700 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:43.959 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:43.959 [146/267] Linking static target lib/librte_cmdline.a 00:02:43.959 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:43.959 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:43.959 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:43.959 [150/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:43.959 [151/267] Linking static target lib/librte_timer.a 00:02:44.217 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.217 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.217 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.217 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.217 [156/267] Linking static target lib/librte_ethdev.a 00:02:44.476 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.476 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.476 [159/267] Linking static target lib/librte_hash.a 00:02:44.476 [160/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.476 [161/267] Linking static target lib/librte_compressdev.a 00:02:44.476 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.476 [163/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.476 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.734 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.734 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.734 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.734 [168/267] Linking static target lib/librte_dmadev.a 00:02:44.993 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.993 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.993 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.993 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.993 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.993 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.250 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:45.250 [176/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.250 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:45.250 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:45.250 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:45.250 [180/267] Linking static target lib/librte_cryptodev.a 00:02:45.508 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:45.508 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:45.508 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.508 [184/267] Linking static target lib/librte_power.a 00:02:45.508 [185/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.766 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:45.766 [187/267] Linking static target lib/librte_reorder.a 00:02:45.766 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:45.766 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:45.766 [190/267] Linking static target lib/librte_security.a 00:02:45.766 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:45.766 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:46.025 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:46.025 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.295 [195/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.295 [196/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.295 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:46.575 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:46.575 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:46.575 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:46.575 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:46.833 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:46.833 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:46.833 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:46.833 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:46.833 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:46.833 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:47.091 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:47.091 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:47.091 [210/267] Generating drivers/rte_bus_vdev.pmd.c 
with a custom command 00:02:47.091 [211/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:47.091 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.091 [213/267] Linking static target drivers/librte_bus_vdev.a 00:02:47.091 [214/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.091 [215/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:47.091 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.091 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:47.091 [218/267] Linking static target drivers/librte_bus_pci.a 00:02:47.091 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:47.091 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:47.350 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:47.350 [222/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.350 [223/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.350 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:47.350 [225/267] Linking static target drivers/librte_mempool_ring.a 00:02:47.609 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.175 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:48.740 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.740 [229/267] Linking target lib/librte_eal.so.24.1 00:02:48.740 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:48.740 [231/267] Linking target lib/librte_ring.so.24.1 00:02:48.740 [232/267] Linking target lib/librte_pci.so.24.1 00:02:48.740 [233/267] Linking target lib/librte_meter.so.24.1 00:02:48.740 [234/267] Linking target lib/librte_dmadev.so.24.1 00:02:48.740 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:48.998 [236/267] Linking target lib/librte_timer.so.24.1 00:02:48.998 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:48.998 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:48.998 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:48.998 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:48.998 [241/267] Linking target lib/librte_mempool.so.24.1 00:02:48.998 [242/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:48.998 [243/267] Linking target lib/librte_rcu.so.24.1 00:02:48.998 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:48.998 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:48.998 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:49.256 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:49.256 [248/267] Linking target lib/librte_mbuf.so.24.1 00:02:49.256 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:49.256 [250/267] Linking 
target lib/librte_compressdev.so.24.1 00:02:49.256 [251/267] Linking target lib/librte_net.so.24.1 00:02:49.256 [252/267] Linking target lib/librte_reorder.so.24.1 00:02:49.256 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:02:49.256 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:49.512 [255/267] Linking target lib/librte_cmdline.so.24.1 00:02:49.512 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:49.512 [257/267] Linking target lib/librte_hash.so.24.1 00:02:49.512 [258/267] Linking target lib/librte_security.so.24.1 00:02:49.512 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:49.769 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.769 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:49.769 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:50.026 [263/267] Linking target lib/librte_power.so.24.1 00:02:50.590 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:50.591 [265/267] Linking static target lib/librte_vhost.a 00:02:51.960 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.960 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:51.960 INFO: autodetecting backend as ninja 00:02:51.960 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:06.863 CC lib/ut/ut.o 00:03:06.863 CC lib/log/log_flags.o 00:03:06.863 CC lib/log/log.o 00:03:06.863 CC lib/log/log_deprecated.o 00:03:06.863 CC lib/ut_mock/mock.o 00:03:06.863 LIB libspdk_ut.a 00:03:06.863 LIB libspdk_log.a 00:03:06.863 LIB libspdk_ut_mock.a 00:03:06.863 SO libspdk_ut.so.2.0 00:03:06.863 SO libspdk_ut_mock.so.6.0 00:03:06.863 SO libspdk_log.so.7.1 00:03:06.863 SYMLINK libspdk_ut.so 00:03:06.863 SYMLINK libspdk_ut_mock.so 00:03:06.863 SYMLINK libspdk_log.so 00:03:06.863 CC lib/util/base64.o 00:03:06.863 CC lib/util/bit_array.o 00:03:06.863 CC lib/util/cpuset.o 00:03:06.863 CC lib/util/crc16.o 00:03:06.863 CC lib/util/crc32.o 00:03:06.863 CC lib/ioat/ioat.o 00:03:06.863 CC lib/dma/dma.o 00:03:06.863 CC lib/util/crc32c.o 00:03:06.863 CXX lib/trace_parser/trace.o 00:03:06.863 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.863 CC lib/util/crc32_ieee.o 00:03:06.863 CC lib/vfio_user/host/vfio_user.o 00:03:06.863 CC lib/util/crc64.o 00:03:06.863 CC lib/util/dif.o 00:03:06.863 LIB libspdk_dma.a 00:03:06.863 SO libspdk_dma.so.5.0 00:03:06.863 CC lib/util/fd.o 00:03:06.863 CC lib/util/fd_group.o 00:03:06.863 CC lib/util/file.o 00:03:06.863 SYMLINK libspdk_dma.so 00:03:06.863 CC lib/util/hexlify.o 00:03:06.863 LIB libspdk_ioat.a 00:03:06.863 CC lib/util/iov.o 00:03:06.863 SO libspdk_ioat.so.7.0 00:03:06.863 CC lib/util/math.o 00:03:06.863 CC lib/util/net.o 00:03:06.863 LIB libspdk_vfio_user.a 00:03:06.863 SYMLINK libspdk_ioat.so 00:03:06.863 CC lib/util/pipe.o 00:03:06.863 SO libspdk_vfio_user.so.5.0 00:03:06.863 CC lib/util/strerror_tls.o 00:03:06.863 CC lib/util/string.o 00:03:07.121 SYMLINK libspdk_vfio_user.so 00:03:07.121 CC lib/util/uuid.o 00:03:07.121 CC lib/util/xor.o 00:03:07.121 CC lib/util/zipf.o 00:03:07.121 CC lib/util/md5.o 00:03:07.378 LIB libspdk_util.a 00:03:07.378 LIB libspdk_trace_parser.a 00:03:07.378 SO libspdk_trace_parser.so.6.0 00:03:07.378 SO libspdk_util.so.10.1 00:03:07.378 SYMLINK libspdk_trace_parser.so 
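From roughly this point the output switches from DPDK's ninja backend to SPDK's own quiet make output: a CC line compiles a single C object, a LIB line archives a static libspdk_*.a, an SO line links the versioned shared object (libspdk_log.so.7.1 and friends), and a SYMLINK line creates the matching unversioned libspdk_*.so link. A plausible local equivalent of this phase is sketched below; the concrete flag set is an assumption, since the CI harness drives the build through its own wrapper scripts, but each option shown exists in SPDK's configure:

  ./configure --enable-debug --enable-asan --enable-ubsan --with-shared
  make -j10

Here --with-shared is what produces the SO/SYMLINK steps on top of the static archives, and the sanitizer switches line up with the b_sanitize=address choice visible in the DPDK configuration earlier in the log.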
00:03:07.635 SYMLINK libspdk_util.so 00:03:07.635 CC lib/rdma_utils/rdma_utils.o 00:03:07.635 CC lib/vmd/vmd.o 00:03:07.635 CC lib/idxd/idxd.o 00:03:07.635 CC lib/vmd/led.o 00:03:07.635 CC lib/json/json_parse.o 00:03:07.635 CC lib/json/json_util.o 00:03:07.635 CC lib/idxd/idxd_user.o 00:03:07.635 CC lib/json/json_write.o 00:03:07.635 CC lib/conf/conf.o 00:03:07.635 CC lib/env_dpdk/env.o 00:03:07.894 CC lib/idxd/idxd_kernel.o 00:03:07.894 LIB libspdk_rdma_utils.a 00:03:07.894 LIB libspdk_conf.a 00:03:07.894 CC lib/env_dpdk/memory.o 00:03:07.894 CC lib/env_dpdk/pci.o 00:03:07.894 SO libspdk_rdma_utils.so.1.0 00:03:07.894 SO libspdk_conf.so.6.0 00:03:07.894 CC lib/env_dpdk/init.o 00:03:07.894 CC lib/env_dpdk/threads.o 00:03:07.894 SYMLINK libspdk_rdma_utils.so 00:03:07.894 LIB libspdk_json.a 00:03:07.894 CC lib/env_dpdk/pci_ioat.o 00:03:07.894 SYMLINK libspdk_conf.so 00:03:07.894 CC lib/env_dpdk/pci_virtio.o 00:03:07.894 SO libspdk_json.so.6.0 00:03:08.153 SYMLINK libspdk_json.so 00:03:08.153 CC lib/env_dpdk/pci_vmd.o 00:03:08.153 CC lib/env_dpdk/pci_idxd.o 00:03:08.153 CC lib/env_dpdk/pci_event.o 00:03:08.153 CC lib/env_dpdk/sigbus_handler.o 00:03:08.153 LIB libspdk_idxd.a 00:03:08.153 CC lib/rdma_provider/common.o 00:03:08.153 CC lib/env_dpdk/pci_dpdk.o 00:03:08.153 CC lib/jsonrpc/jsonrpc_server.o 00:03:08.414 SO libspdk_idxd.so.12.1 00:03:08.414 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.414 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.414 SYMLINK libspdk_idxd.so 00:03:08.414 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.414 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:08.414 LIB libspdk_vmd.a 00:03:08.414 SO libspdk_vmd.so.6.0 00:03:08.414 CC lib/jsonrpc/jsonrpc_client.o 00:03:08.414 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:08.414 SYMLINK libspdk_vmd.so 00:03:08.414 LIB libspdk_rdma_provider.a 00:03:08.414 SO libspdk_rdma_provider.so.7.0 00:03:08.673 SYMLINK libspdk_rdma_provider.so 00:03:08.674 LIB libspdk_jsonrpc.a 00:03:08.674 SO libspdk_jsonrpc.so.6.0 00:03:08.674 SYMLINK libspdk_jsonrpc.so 00:03:08.930 CC lib/rpc/rpc.o 00:03:09.188 LIB libspdk_rpc.a 00:03:09.188 LIB libspdk_env_dpdk.a 00:03:09.188 SO libspdk_rpc.so.6.0 00:03:09.188 SYMLINK libspdk_rpc.so 00:03:09.188 SO libspdk_env_dpdk.so.15.1 00:03:09.446 SYMLINK libspdk_env_dpdk.so 00:03:09.446 CC lib/notify/notify.o 00:03:09.446 CC lib/notify/notify_rpc.o 00:03:09.446 CC lib/keyring/keyring.o 00:03:09.446 CC lib/trace/trace.o 00:03:09.446 CC lib/keyring/keyring_rpc.o 00:03:09.446 CC lib/trace/trace_flags.o 00:03:09.446 CC lib/trace/trace_rpc.o 00:03:09.446 LIB libspdk_notify.a 00:03:09.446 SO libspdk_notify.so.6.0 00:03:09.446 LIB libspdk_keyring.a 00:03:09.446 SYMLINK libspdk_notify.so 00:03:09.446 SO libspdk_keyring.so.2.0 00:03:09.704 LIB libspdk_trace.a 00:03:09.704 SYMLINK libspdk_keyring.so 00:03:09.704 SO libspdk_trace.so.11.0 00:03:09.704 SYMLINK libspdk_trace.so 00:03:09.962 CC lib/thread/thread.o 00:03:09.962 CC lib/thread/iobuf.o 00:03:09.962 CC lib/sock/sock.o 00:03:09.962 CC lib/sock/sock_rpc.o 00:03:10.219 LIB libspdk_sock.a 00:03:10.219 SO libspdk_sock.so.10.0 00:03:10.219 SYMLINK libspdk_sock.so 00:03:10.476 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.476 CC lib/nvme/nvme_ctrlr.o 00:03:10.476 CC lib/nvme/nvme_ns.o 00:03:10.476 CC lib/nvme/nvme_fabric.o 00:03:10.476 CC lib/nvme/nvme_ns_cmd.o 00:03:10.476 CC lib/nvme/nvme_qpair.o 00:03:10.476 CC lib/nvme/nvme.o 00:03:10.476 CC lib/nvme/nvme_pcie.o 00:03:10.476 CC lib/nvme/nvme_pcie_common.o 00:03:11.039 CC lib/nvme/nvme_quirks.o 00:03:11.039 CC lib/nvme/nvme_transport.o 
00:03:11.298 CC lib/nvme/nvme_discovery.o 00:03:11.298 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.298 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.298 CC lib/nvme/nvme_tcp.o 00:03:11.298 CC lib/nvme/nvme_opal.o 00:03:11.298 LIB libspdk_thread.a 00:03:11.556 SO libspdk_thread.so.11.0 00:03:11.556 SYMLINK libspdk_thread.so 00:03:11.556 CC lib/nvme/nvme_io_msg.o 00:03:11.556 CC lib/nvme/nvme_poll_group.o 00:03:11.556 CC lib/nvme/nvme_zns.o 00:03:11.556 CC lib/nvme/nvme_stubs.o 00:03:11.814 CC lib/nvme/nvme_auth.o 00:03:11.814 CC lib/accel/accel.o 00:03:11.814 CC lib/nvme/nvme_cuse.o 00:03:11.814 CC lib/nvme/nvme_rdma.o 00:03:12.072 CC lib/blob/blobstore.o 00:03:12.072 CC lib/blob/request.o 00:03:12.072 CC lib/blob/zeroes.o 00:03:12.072 CC lib/blob/blob_bs_dev.o 00:03:12.334 CC lib/accel/accel_rpc.o 00:03:12.334 CC lib/accel/accel_sw.o 00:03:12.335 CC lib/init/json_config.o 00:03:12.335 CC lib/init/subsystem.o 00:03:12.593 CC lib/virtio/virtio.o 00:03:12.593 CC lib/virtio/virtio_vhost_user.o 00:03:12.593 CC lib/fsdev/fsdev.o 00:03:12.593 CC lib/init/subsystem_rpc.o 00:03:12.593 CC lib/init/rpc.o 00:03:12.593 CC lib/virtio/virtio_vfio_user.o 00:03:12.852 CC lib/virtio/virtio_pci.o 00:03:12.852 CC lib/fsdev/fsdev_io.o 00:03:12.852 LIB libspdk_init.a 00:03:12.852 CC lib/fsdev/fsdev_rpc.o 00:03:12.852 SO libspdk_init.so.6.0 00:03:12.852 SYMLINK libspdk_init.so 00:03:12.852 LIB libspdk_accel.a 00:03:12.852 SO libspdk_accel.so.16.0 00:03:13.110 LIB libspdk_virtio.a 00:03:13.110 CC lib/event/app.o 00:03:13.110 CC lib/event/log_rpc.o 00:03:13.110 CC lib/event/app_rpc.o 00:03:13.110 CC lib/event/reactor.o 00:03:13.110 SO libspdk_virtio.so.7.0 00:03:13.110 SYMLINK libspdk_accel.so 00:03:13.110 CC lib/event/scheduler_static.o 00:03:13.110 SYMLINK libspdk_virtio.so 00:03:13.110 LIB libspdk_fsdev.a 00:03:13.110 SO libspdk_fsdev.so.2.0 00:03:13.110 CC lib/bdev/bdev_rpc.o 00:03:13.110 CC lib/bdev/bdev.o 00:03:13.110 CC lib/bdev/bdev_zone.o 00:03:13.110 CC lib/bdev/part.o 00:03:13.110 SYMLINK libspdk_fsdev.so 00:03:13.369 CC lib/bdev/scsi_nvme.o 00:03:13.369 LIB libspdk_nvme.a 00:03:13.369 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:13.369 SO libspdk_nvme.so.15.0 00:03:13.369 LIB libspdk_event.a 00:03:13.629 SO libspdk_event.so.14.0 00:03:13.629 SYMLINK libspdk_event.so 00:03:13.629 SYMLINK libspdk_nvme.so 00:03:14.196 LIB libspdk_fuse_dispatcher.a 00:03:14.196 SO libspdk_fuse_dispatcher.so.1.0 00:03:14.196 SYMLINK libspdk_fuse_dispatcher.so 00:03:14.763 LIB libspdk_blob.a 00:03:14.763 SO libspdk_blob.so.11.0 00:03:15.021 SYMLINK libspdk_blob.so 00:03:15.021 CC lib/blobfs/tree.o 00:03:15.021 CC lib/blobfs/blobfs.o 00:03:15.021 CC lib/lvol/lvol.o 00:03:15.956 LIB libspdk_blobfs.a 00:03:15.956 LIB libspdk_bdev.a 00:03:15.956 SO libspdk_blobfs.so.10.0 00:03:15.956 SO libspdk_bdev.so.17.0 00:03:15.956 SYMLINK libspdk_blobfs.so 00:03:15.956 SYMLINK libspdk_bdev.so 00:03:15.956 LIB libspdk_lvol.a 00:03:16.214 SO libspdk_lvol.so.10.0 00:03:16.214 SYMLINK libspdk_lvol.so 00:03:16.214 CC lib/scsi/dev.o 00:03:16.214 CC lib/scsi/lun.o 00:03:16.214 CC lib/ftl/ftl_core.o 00:03:16.214 CC lib/scsi/scsi.o 00:03:16.214 CC lib/ftl/ftl_init.o 00:03:16.214 CC lib/ftl/ftl_layout.o 00:03:16.214 CC lib/scsi/port.o 00:03:16.214 CC lib/ublk/ublk.o 00:03:16.214 CC lib/nbd/nbd.o 00:03:16.214 CC lib/nvmf/ctrlr.o 00:03:16.214 CC lib/nbd/nbd_rpc.o 00:03:16.471 CC lib/scsi/scsi_bdev.o 00:03:16.471 CC lib/scsi/scsi_pr.o 00:03:16.471 CC lib/scsi/scsi_rpc.o 00:03:16.471 CC lib/scsi/task.o 00:03:16.471 CC lib/nvmf/ctrlr_discovery.o 
00:03:16.471 CC lib/ftl/ftl_debug.o 00:03:16.471 CC lib/ublk/ublk_rpc.o 00:03:16.471 CC lib/ftl/ftl_io.o 00:03:16.471 CC lib/ftl/ftl_sb.o 00:03:16.727 CC lib/ftl/ftl_l2p.o 00:03:16.727 LIB libspdk_nbd.a 00:03:16.727 CC lib/ftl/ftl_l2p_flat.o 00:03:16.727 SO libspdk_nbd.so.7.0 00:03:16.727 SYMLINK libspdk_nbd.so 00:03:16.727 CC lib/nvmf/ctrlr_bdev.o 00:03:16.727 CC lib/nvmf/subsystem.o 00:03:16.727 CC lib/ftl/ftl_nv_cache.o 00:03:16.727 CC lib/ftl/ftl_band.o 00:03:16.727 CC lib/ftl/ftl_band_ops.o 00:03:16.727 CC lib/ftl/ftl_writer.o 00:03:16.727 LIB libspdk_ublk.a 00:03:16.727 LIB libspdk_scsi.a 00:03:16.985 SO libspdk_ublk.so.3.0 00:03:16.985 SO libspdk_scsi.so.9.0 00:03:16.985 SYMLINK libspdk_ublk.so 00:03:16.985 CC lib/nvmf/nvmf.o 00:03:16.985 CC lib/nvmf/nvmf_rpc.o 00:03:16.985 SYMLINK libspdk_scsi.so 00:03:16.985 CC lib/nvmf/transport.o 00:03:16.985 CC lib/ftl/ftl_rq.o 00:03:17.241 CC lib/nvmf/tcp.o 00:03:17.241 CC lib/ftl/ftl_reloc.o 00:03:17.241 CC lib/iscsi/conn.o 00:03:17.499 CC lib/iscsi/init_grp.o 00:03:17.499 CC lib/ftl/ftl_l2p_cache.o 00:03:17.757 CC lib/ftl/ftl_p2l.o 00:03:17.757 CC lib/nvmf/stubs.o 00:03:17.757 CC lib/vhost/vhost.o 00:03:17.757 CC lib/vhost/vhost_rpc.o 00:03:17.757 CC lib/nvmf/mdns_server.o 00:03:17.757 CC lib/nvmf/rdma.o 00:03:18.015 CC lib/iscsi/iscsi.o 00:03:18.015 CC lib/nvmf/auth.o 00:03:18.015 CC lib/ftl/ftl_p2l_log.o 00:03:18.015 CC lib/ftl/mngt/ftl_mngt.o 00:03:18.273 CC lib/vhost/vhost_scsi.o 00:03:18.273 CC lib/vhost/vhost_blk.o 00:03:18.273 CC lib/vhost/rte_vhost_user.o 00:03:18.273 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:18.273 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:18.530 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:18.530 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:18.530 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:18.789 CC lib/iscsi/param.o 00:03:18.789 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:18.789 CC lib/iscsi/portal_grp.o 00:03:18.789 CC lib/iscsi/tgt_node.o 00:03:18.789 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:18.789 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:19.047 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:19.047 CC lib/iscsi/iscsi_subsystem.o 00:03:19.047 CC lib/iscsi/iscsi_rpc.o 00:03:19.047 CC lib/iscsi/task.o 00:03:19.047 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:19.047 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:19.047 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:19.047 CC lib/ftl/utils/ftl_conf.o 00:03:19.304 CC lib/ftl/utils/ftl_md.o 00:03:19.304 CC lib/ftl/utils/ftl_mempool.o 00:03:19.304 CC lib/ftl/utils/ftl_bitmap.o 00:03:19.304 CC lib/ftl/utils/ftl_property.o 00:03:19.304 LIB libspdk_vhost.a 00:03:19.304 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:19.304 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:19.304 SO libspdk_vhost.so.8.0 00:03:19.304 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:19.304 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:19.304 SYMLINK libspdk_vhost.so 00:03:19.304 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:19.562 LIB libspdk_iscsi.a 00:03:19.562 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:19.562 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:19.562 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:19.562 SO libspdk_iscsi.so.8.0 00:03:19.562 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:19.562 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:19.562 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:19.562 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:19.562 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:19.562 CC lib/ftl/base/ftl_base_dev.o 00:03:19.562 CC lib/ftl/base/ftl_base_bdev.o 00:03:19.562 SYMLINK libspdk_iscsi.so 00:03:19.562 CC lib/ftl/ftl_trace.o 00:03:19.818 LIB libspdk_nvmf.a 00:03:19.818 
LIB libspdk_ftl.a 00:03:19.818 SO libspdk_nvmf.so.20.0 00:03:20.074 SO libspdk_ftl.so.9.0 00:03:20.074 SYMLINK libspdk_nvmf.so 00:03:20.333 SYMLINK libspdk_ftl.so 00:03:20.590 CC module/env_dpdk/env_dpdk_rpc.o 00:03:20.590 CC module/sock/posix/posix.o 00:03:20.590 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:20.590 CC module/scheduler/gscheduler/gscheduler.o 00:03:20.591 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:20.591 CC module/blob/bdev/blob_bdev.o 00:03:20.591 CC module/accel/error/accel_error.o 00:03:20.591 CC module/fsdev/aio/fsdev_aio.o 00:03:20.591 CC module/keyring/file/keyring.o 00:03:20.591 CC module/accel/ioat/accel_ioat.o 00:03:20.591 LIB libspdk_env_dpdk_rpc.a 00:03:20.591 SO libspdk_env_dpdk_rpc.so.6.0 00:03:20.591 LIB libspdk_scheduler_gscheduler.a 00:03:20.591 SO libspdk_scheduler_gscheduler.so.4.0 00:03:20.591 SYMLINK libspdk_env_dpdk_rpc.so 00:03:20.848 CC module/accel/error/accel_error_rpc.o 00:03:20.848 LIB libspdk_scheduler_dpdk_governor.a 00:03:20.848 CC module/keyring/file/keyring_rpc.o 00:03:20.848 CC module/accel/ioat/accel_ioat_rpc.o 00:03:20.848 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:20.848 SYMLINK libspdk_scheduler_gscheduler.so 00:03:20.848 LIB libspdk_scheduler_dynamic.a 00:03:20.848 SO libspdk_scheduler_dynamic.so.4.0 00:03:20.848 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:20.848 LIB libspdk_blob_bdev.a 00:03:20.848 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:20.848 SYMLINK libspdk_scheduler_dynamic.so 00:03:20.848 SO libspdk_blob_bdev.so.11.0 00:03:20.848 CC module/fsdev/aio/linux_aio_mgr.o 00:03:20.848 LIB libspdk_accel_error.a 00:03:20.848 LIB libspdk_accel_ioat.a 00:03:20.848 LIB libspdk_keyring_file.a 00:03:20.848 SO libspdk_keyring_file.so.2.0 00:03:20.848 SO libspdk_accel_error.so.2.0 00:03:20.848 SO libspdk_accel_ioat.so.6.0 00:03:20.848 SYMLINK libspdk_blob_bdev.so 00:03:20.848 CC module/accel/dsa/accel_dsa.o 00:03:20.848 CC module/accel/dsa/accel_dsa_rpc.o 00:03:20.848 CC module/accel/iaa/accel_iaa.o 00:03:20.848 SYMLINK libspdk_accel_error.so 00:03:20.848 SYMLINK libspdk_keyring_file.so 00:03:20.848 SYMLINK libspdk_accel_ioat.so 00:03:20.848 CC module/accel/iaa/accel_iaa_rpc.o 00:03:21.107 CC module/keyring/linux/keyring.o 00:03:21.107 LIB libspdk_accel_iaa.a 00:03:21.107 LIB libspdk_accel_dsa.a 00:03:21.107 SO libspdk_accel_iaa.so.3.0 00:03:21.107 CC module/bdev/gpt/gpt.o 00:03:21.107 CC module/bdev/error/vbdev_error.o 00:03:21.107 CC module/blobfs/bdev/blobfs_bdev.o 00:03:21.107 CC module/bdev/delay/vbdev_delay.o 00:03:21.107 SO libspdk_accel_dsa.so.5.0 00:03:21.107 LIB libspdk_fsdev_aio.a 00:03:21.107 CC module/bdev/lvol/vbdev_lvol.o 00:03:21.107 SYMLINK libspdk_accel_iaa.so 00:03:21.107 SO libspdk_fsdev_aio.so.1.0 00:03:21.107 CC module/keyring/linux/keyring_rpc.o 00:03:21.107 CC module/bdev/error/vbdev_error_rpc.o 00:03:21.365 SYMLINK libspdk_accel_dsa.so 00:03:21.365 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:21.365 CC module/bdev/gpt/vbdev_gpt.o 00:03:21.365 SYMLINK libspdk_fsdev_aio.so 00:03:21.365 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:21.365 LIB libspdk_keyring_linux.a 00:03:21.365 LIB libspdk_sock_posix.a 00:03:21.365 SO libspdk_keyring_linux.so.1.0 00:03:21.365 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:21.365 SO libspdk_sock_posix.so.6.0 00:03:21.365 LIB libspdk_blobfs_bdev.a 00:03:21.365 LIB libspdk_bdev_error.a 00:03:21.365 SO libspdk_blobfs_bdev.so.6.0 00:03:21.365 SO libspdk_bdev_error.so.6.0 00:03:21.365 SYMLINK libspdk_keyring_linux.so 00:03:21.365 SYMLINK libspdk_sock_posix.so 
00:03:21.365 SYMLINK libspdk_bdev_error.so 00:03:21.365 CC module/bdev/malloc/bdev_malloc.o 00:03:21.365 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:21.365 SYMLINK libspdk_blobfs_bdev.so 00:03:21.622 LIB libspdk_bdev_gpt.a 00:03:21.622 LIB libspdk_bdev_delay.a 00:03:21.622 SO libspdk_bdev_gpt.so.6.0 00:03:21.622 SO libspdk_bdev_delay.so.6.0 00:03:21.622 CC module/bdev/nvme/bdev_nvme.o 00:03:21.622 CC module/bdev/null/bdev_null.o 00:03:21.622 CC module/bdev/passthru/vbdev_passthru.o 00:03:21.622 SYMLINK libspdk_bdev_gpt.so 00:03:21.622 SYMLINK libspdk_bdev_delay.so 00:03:21.622 CC module/bdev/raid/bdev_raid.o 00:03:21.622 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:21.622 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:21.622 CC module/bdev/split/vbdev_split.o 00:03:21.622 LIB libspdk_bdev_lvol.a 00:03:21.622 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:21.622 LIB libspdk_bdev_malloc.a 00:03:21.880 CC module/bdev/raid/bdev_raid_rpc.o 00:03:21.880 SO libspdk_bdev_lvol.so.6.0 00:03:21.880 SO libspdk_bdev_malloc.so.6.0 00:03:21.880 CC module/bdev/null/bdev_null_rpc.o 00:03:21.880 SYMLINK libspdk_bdev_lvol.so 00:03:21.880 SYMLINK libspdk_bdev_malloc.so 00:03:21.880 CC module/bdev/raid/bdev_raid_sb.o 00:03:21.880 CC module/bdev/split/vbdev_split_rpc.o 00:03:21.880 LIB libspdk_bdev_passthru.a 00:03:21.880 CC module/bdev/raid/raid0.o 00:03:21.880 SO libspdk_bdev_passthru.so.6.0 00:03:21.880 LIB libspdk_bdev_null.a 00:03:21.880 SYMLINK libspdk_bdev_passthru.so 00:03:21.880 SO libspdk_bdev_null.so.6.0 00:03:21.880 LIB libspdk_bdev_split.a 00:03:21.880 SO libspdk_bdev_split.so.6.0 00:03:21.880 SYMLINK libspdk_bdev_null.so 00:03:22.180 SYMLINK libspdk_bdev_split.so 00:03:22.180 CC module/bdev/raid/raid1.o 00:03:22.180 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:22.180 CC module/bdev/raid/concat.o 00:03:22.180 CC module/bdev/xnvme/bdev_xnvme.o 00:03:22.180 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:22.180 CC module/bdev/aio/bdev_aio.o 00:03:22.180 CC module/bdev/ftl/bdev_ftl.o 00:03:22.180 LIB libspdk_bdev_zone_block.a 00:03:22.180 SO libspdk_bdev_zone_block.so.6.0 00:03:22.180 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:22.180 CC module/bdev/nvme/nvme_rpc.o 00:03:22.180 CC module/bdev/aio/bdev_aio_rpc.o 00:03:22.180 SYMLINK libspdk_bdev_zone_block.so 00:03:22.437 LIB libspdk_bdev_xnvme.a 00:03:22.437 CC module/bdev/nvme/bdev_mdns_client.o 00:03:22.437 SO libspdk_bdev_xnvme.so.3.0 00:03:22.437 CC module/bdev/nvme/vbdev_opal.o 00:03:22.437 SYMLINK libspdk_bdev_xnvme.so 00:03:22.437 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:22.437 CC module/bdev/iscsi/bdev_iscsi.o 00:03:22.437 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:22.437 LIB libspdk_bdev_aio.a 00:03:22.437 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:22.437 LIB libspdk_bdev_ftl.a 00:03:22.437 LIB libspdk_bdev_raid.a 00:03:22.437 SO libspdk_bdev_ftl.so.6.0 00:03:22.437 SO libspdk_bdev_aio.so.6.0 00:03:22.437 SO libspdk_bdev_raid.so.6.0 00:03:22.437 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:22.437 SYMLINK libspdk_bdev_ftl.so 00:03:22.437 SYMLINK libspdk_bdev_aio.so 00:03:22.437 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:22.437 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:22.695 SYMLINK libspdk_bdev_raid.so 00:03:22.695 LIB libspdk_bdev_iscsi.a 00:03:22.695 SO libspdk_bdev_iscsi.so.6.0 00:03:22.695 SYMLINK libspdk_bdev_iscsi.so 00:03:22.951 LIB libspdk_bdev_virtio.a 00:03:22.951 SO libspdk_bdev_virtio.so.6.0 00:03:23.208 SYMLINK libspdk_bdev_virtio.so 00:03:24.141 LIB libspdk_bdev_nvme.a 00:03:24.141 SO 
libspdk_bdev_nvme.so.7.1 00:03:24.398 SYMLINK libspdk_bdev_nvme.so 00:03:24.654 CC module/event/subsystems/vmd/vmd.o 00:03:24.654 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:24.654 CC module/event/subsystems/keyring/keyring.o 00:03:24.654 CC module/event/subsystems/scheduler/scheduler.o 00:03:24.654 CC module/event/subsystems/sock/sock.o 00:03:24.654 CC module/event/subsystems/fsdev/fsdev.o 00:03:24.654 CC module/event/subsystems/iobuf/iobuf.o 00:03:24.654 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:24.654 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:24.912 LIB libspdk_event_keyring.a 00:03:24.912 LIB libspdk_event_scheduler.a 00:03:24.912 LIB libspdk_event_fsdev.a 00:03:24.912 LIB libspdk_event_vmd.a 00:03:24.912 LIB libspdk_event_sock.a 00:03:24.912 SO libspdk_event_keyring.so.1.0 00:03:24.912 LIB libspdk_event_iobuf.a 00:03:24.912 LIB libspdk_event_vhost_blk.a 00:03:24.912 SO libspdk_event_fsdev.so.1.0 00:03:24.912 SO libspdk_event_scheduler.so.4.0 00:03:24.912 SO libspdk_event_vmd.so.6.0 00:03:24.912 SO libspdk_event_sock.so.5.0 00:03:24.912 SO libspdk_event_vhost_blk.so.3.0 00:03:24.912 SO libspdk_event_iobuf.so.3.0 00:03:24.912 SYMLINK libspdk_event_keyring.so 00:03:24.912 SYMLINK libspdk_event_fsdev.so 00:03:24.912 SYMLINK libspdk_event_scheduler.so 00:03:24.912 SYMLINK libspdk_event_sock.so 00:03:24.912 SYMLINK libspdk_event_vhost_blk.so 00:03:24.912 SYMLINK libspdk_event_vmd.so 00:03:24.912 SYMLINK libspdk_event_iobuf.so 00:03:25.170 CC module/event/subsystems/accel/accel.o 00:03:25.170 LIB libspdk_event_accel.a 00:03:25.170 SO libspdk_event_accel.so.6.0 00:03:25.427 SYMLINK libspdk_event_accel.so 00:03:25.428 CC module/event/subsystems/bdev/bdev.o 00:03:25.686 LIB libspdk_event_bdev.a 00:03:25.686 SO libspdk_event_bdev.so.6.0 00:03:25.686 SYMLINK libspdk_event_bdev.so 00:03:25.944 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:25.944 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:25.944 CC module/event/subsystems/nbd/nbd.o 00:03:25.944 CC module/event/subsystems/ublk/ublk.o 00:03:25.944 CC module/event/subsystems/scsi/scsi.o 00:03:25.944 LIB libspdk_event_ublk.a 00:03:25.944 LIB libspdk_event_nbd.a 00:03:25.944 SO libspdk_event_ublk.so.3.0 00:03:25.944 LIB libspdk_event_scsi.a 00:03:25.944 SO libspdk_event_nbd.so.6.0 00:03:25.944 SO libspdk_event_scsi.so.6.0 00:03:25.944 SYMLINK libspdk_event_ublk.so 00:03:25.944 SYMLINK libspdk_event_nbd.so 00:03:25.944 SYMLINK libspdk_event_scsi.so 00:03:25.944 LIB libspdk_event_nvmf.a 00:03:26.202 SO libspdk_event_nvmf.so.6.0 00:03:26.202 SYMLINK libspdk_event_nvmf.so 00:03:26.202 CC module/event/subsystems/iscsi/iscsi.o 00:03:26.202 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:26.460 LIB libspdk_event_iscsi.a 00:03:26.460 LIB libspdk_event_vhost_scsi.a 00:03:26.460 SO libspdk_event_iscsi.so.6.0 00:03:26.460 SO libspdk_event_vhost_scsi.so.3.0 00:03:26.460 SYMLINK libspdk_event_iscsi.so 00:03:26.460 SYMLINK libspdk_event_vhost_scsi.so 00:03:26.460 SO libspdk.so.6.0 00:03:26.460 SYMLINK libspdk.so 00:03:26.720 CC test/rpc_client/rpc_client_test.o 00:03:26.720 TEST_HEADER include/spdk/accel.h 00:03:26.720 TEST_HEADER include/spdk/accel_module.h 00:03:26.720 CC app/trace_record/trace_record.o 00:03:26.720 TEST_HEADER include/spdk/assert.h 00:03:26.720 TEST_HEADER include/spdk/barrier.h 00:03:26.720 TEST_HEADER include/spdk/base64.h 00:03:26.720 TEST_HEADER include/spdk/bdev.h 00:03:26.720 TEST_HEADER include/spdk/bdev_module.h 00:03:26.720 TEST_HEADER include/spdk/bdev_zone.h 00:03:26.720 CXX app/trace/trace.o 
00:03:26.720 TEST_HEADER include/spdk/bit_array.h 00:03:26.720 TEST_HEADER include/spdk/bit_pool.h 00:03:26.720 TEST_HEADER include/spdk/blob_bdev.h 00:03:26.720 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:26.720 TEST_HEADER include/spdk/blobfs.h 00:03:26.720 TEST_HEADER include/spdk/blob.h 00:03:26.720 TEST_HEADER include/spdk/conf.h 00:03:26.720 TEST_HEADER include/spdk/config.h 00:03:26.720 TEST_HEADER include/spdk/cpuset.h 00:03:26.720 TEST_HEADER include/spdk/crc16.h 00:03:26.720 TEST_HEADER include/spdk/crc32.h 00:03:26.720 TEST_HEADER include/spdk/crc64.h 00:03:26.720 TEST_HEADER include/spdk/dif.h 00:03:26.720 TEST_HEADER include/spdk/dma.h 00:03:26.720 TEST_HEADER include/spdk/endian.h 00:03:26.720 TEST_HEADER include/spdk/env_dpdk.h 00:03:26.720 TEST_HEADER include/spdk/env.h 00:03:26.720 TEST_HEADER include/spdk/event.h 00:03:26.720 CC app/nvmf_tgt/nvmf_main.o 00:03:26.720 TEST_HEADER include/spdk/fd_group.h 00:03:26.720 TEST_HEADER include/spdk/fd.h 00:03:26.720 TEST_HEADER include/spdk/file.h 00:03:26.720 TEST_HEADER include/spdk/fsdev.h 00:03:26.720 TEST_HEADER include/spdk/fsdev_module.h 00:03:26.720 TEST_HEADER include/spdk/ftl.h 00:03:26.720 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:26.720 CC test/thread/poller_perf/poller_perf.o 00:03:26.720 TEST_HEADER include/spdk/gpt_spec.h 00:03:26.720 TEST_HEADER include/spdk/hexlify.h 00:03:26.720 TEST_HEADER include/spdk/histogram_data.h 00:03:26.720 TEST_HEADER include/spdk/idxd.h 00:03:26.720 TEST_HEADER include/spdk/idxd_spec.h 00:03:26.720 CC examples/util/zipf/zipf.o 00:03:26.720 TEST_HEADER include/spdk/init.h 00:03:26.720 TEST_HEADER include/spdk/ioat.h 00:03:26.720 TEST_HEADER include/spdk/ioat_spec.h 00:03:26.720 TEST_HEADER include/spdk/iscsi_spec.h 00:03:26.720 TEST_HEADER include/spdk/json.h 00:03:26.720 TEST_HEADER include/spdk/jsonrpc.h 00:03:26.720 CC test/dma/test_dma/test_dma.o 00:03:26.720 TEST_HEADER include/spdk/keyring.h 00:03:26.720 TEST_HEADER include/spdk/keyring_module.h 00:03:26.720 TEST_HEADER include/spdk/likely.h 00:03:26.720 TEST_HEADER include/spdk/log.h 00:03:26.720 TEST_HEADER include/spdk/lvol.h 00:03:26.720 TEST_HEADER include/spdk/md5.h 00:03:26.979 TEST_HEADER include/spdk/memory.h 00:03:26.979 CC test/app/bdev_svc/bdev_svc.o 00:03:26.979 TEST_HEADER include/spdk/mmio.h 00:03:26.979 TEST_HEADER include/spdk/nbd.h 00:03:26.979 TEST_HEADER include/spdk/net.h 00:03:26.979 TEST_HEADER include/spdk/notify.h 00:03:26.979 TEST_HEADER include/spdk/nvme.h 00:03:26.979 TEST_HEADER include/spdk/nvme_intel.h 00:03:26.979 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:26.979 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:26.979 TEST_HEADER include/spdk/nvme_spec.h 00:03:26.979 TEST_HEADER include/spdk/nvme_zns.h 00:03:26.979 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:26.979 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:26.979 TEST_HEADER include/spdk/nvmf.h 00:03:26.979 TEST_HEADER include/spdk/nvmf_spec.h 00:03:26.979 TEST_HEADER include/spdk/nvmf_transport.h 00:03:26.979 TEST_HEADER include/spdk/opal.h 00:03:26.979 TEST_HEADER include/spdk/opal_spec.h 00:03:26.979 TEST_HEADER include/spdk/pci_ids.h 00:03:26.979 TEST_HEADER include/spdk/pipe.h 00:03:26.979 TEST_HEADER include/spdk/queue.h 00:03:26.979 TEST_HEADER include/spdk/reduce.h 00:03:26.979 TEST_HEADER include/spdk/rpc.h 00:03:26.979 TEST_HEADER include/spdk/scheduler.h 00:03:26.979 TEST_HEADER include/spdk/scsi.h 00:03:26.979 TEST_HEADER include/spdk/scsi_spec.h 00:03:26.979 TEST_HEADER include/spdk/sock.h 00:03:26.979 TEST_HEADER 
include/spdk/stdinc.h 00:03:26.979 TEST_HEADER include/spdk/string.h 00:03:26.979 TEST_HEADER include/spdk/thread.h 00:03:26.979 TEST_HEADER include/spdk/trace.h 00:03:26.979 TEST_HEADER include/spdk/trace_parser.h 00:03:26.979 TEST_HEADER include/spdk/tree.h 00:03:26.979 TEST_HEADER include/spdk/ublk.h 00:03:26.979 TEST_HEADER include/spdk/util.h 00:03:26.979 TEST_HEADER include/spdk/uuid.h 00:03:26.979 TEST_HEADER include/spdk/version.h 00:03:26.979 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:26.979 LINK rpc_client_test 00:03:26.979 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:26.979 CC test/env/mem_callbacks/mem_callbacks.o 00:03:26.979 TEST_HEADER include/spdk/vhost.h 00:03:26.979 TEST_HEADER include/spdk/vmd.h 00:03:26.979 LINK poller_perf 00:03:26.979 TEST_HEADER include/spdk/xor.h 00:03:26.979 TEST_HEADER include/spdk/zipf.h 00:03:26.979 CXX test/cpp_headers/accel.o 00:03:26.979 LINK nvmf_tgt 00:03:26.979 LINK zipf 00:03:26.979 LINK bdev_svc 00:03:26.979 LINK spdk_trace_record 00:03:26.979 CXX test/cpp_headers/accel_module.o 00:03:26.979 LINK spdk_trace 00:03:26.979 CC app/iscsi_tgt/iscsi_tgt.o 00:03:27.238 CXX test/cpp_headers/assert.o 00:03:27.238 CC app/spdk_tgt/spdk_tgt.o 00:03:27.238 CC examples/ioat/perf/perf.o 00:03:27.238 CC examples/vmd/lsvmd/lsvmd.o 00:03:27.238 CC test/event/event_perf/event_perf.o 00:03:27.238 CC test/event/reactor/reactor.o 00:03:27.238 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:27.238 CXX test/cpp_headers/barrier.o 00:03:27.238 LINK iscsi_tgt 00:03:27.238 LINK mem_callbacks 00:03:27.238 LINK lsvmd 00:03:27.238 LINK test_dma 00:03:27.238 LINK spdk_tgt 00:03:27.238 LINK event_perf 00:03:27.497 LINK reactor 00:03:27.497 LINK ioat_perf 00:03:27.497 CXX test/cpp_headers/base64.o 00:03:27.497 CXX test/cpp_headers/bdev.o 00:03:27.497 CC test/env/vtophys/vtophys.o 00:03:27.497 CC examples/vmd/led/led.o 00:03:27.497 CC test/event/reactor_perf/reactor_perf.o 00:03:27.497 CC app/spdk_lspci/spdk_lspci.o 00:03:27.497 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:27.497 CC test/app/histogram_perf/histogram_perf.o 00:03:27.497 CXX test/cpp_headers/bdev_module.o 00:03:27.755 CC examples/ioat/verify/verify.o 00:03:27.755 CC test/app/jsoncat/jsoncat.o 00:03:27.755 LINK nvme_fuzz 00:03:27.755 LINK reactor_perf 00:03:27.755 LINK vtophys 00:03:27.756 LINK led 00:03:27.756 LINK spdk_lspci 00:03:27.756 LINK histogram_perf 00:03:27.756 LINK jsoncat 00:03:27.756 LINK verify 00:03:27.756 CXX test/cpp_headers/bdev_zone.o 00:03:27.756 CC test/event/app_repeat/app_repeat.o 00:03:27.756 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:27.756 CC test/event/scheduler/scheduler.o 00:03:28.014 CC app/spdk_nvme_perf/perf.o 00:03:28.014 CXX test/cpp_headers/bit_array.o 00:03:28.014 CC test/app/stub/stub.o 00:03:28.014 LINK app_repeat 00:03:28.014 CC test/blobfs/mkfs/mkfs.o 00:03:28.014 CC test/accel/dif/dif.o 00:03:28.014 LINK env_dpdk_post_init 00:03:28.014 CC examples/idxd/perf/perf.o 00:03:28.014 LINK stub 00:03:28.014 CXX test/cpp_headers/bit_pool.o 00:03:28.014 LINK scheduler 00:03:28.273 LINK mkfs 00:03:28.273 CC test/env/memory/memory_ut.o 00:03:28.273 CXX test/cpp_headers/blob_bdev.o 00:03:28.273 CC app/spdk_nvme_identify/identify.o 00:03:28.273 CC test/lvol/esnap/esnap.o 00:03:28.273 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:28.273 CXX test/cpp_headers/blobfs_bdev.o 00:03:28.273 LINK idxd_perf 00:03:28.532 CC test/nvme/aer/aer.o 00:03:28.532 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:28.532 CXX test/cpp_headers/blobfs.o 00:03:28.532 LINK 
spdk_nvme_perf 00:03:28.532 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:28.532 LINK aer 00:03:28.789 CXX test/cpp_headers/blob.o 00:03:28.789 LINK dif 00:03:28.789 LINK interrupt_tgt 00:03:28.789 CXX test/cpp_headers/conf.o 00:03:28.789 CC test/nvme/reset/reset.o 00:03:28.789 CC examples/thread/thread/thread_ex.o 00:03:28.789 LINK vhost_fuzz 00:03:28.789 CC test/nvme/sgl/sgl.o 00:03:29.048 LINK spdk_nvme_identify 00:03:29.048 CXX test/cpp_headers/config.o 00:03:29.048 CXX test/cpp_headers/cpuset.o 00:03:29.048 CXX test/cpp_headers/crc16.o 00:03:29.048 LINK thread 00:03:29.048 LINK memory_ut 00:03:29.048 LINK reset 00:03:29.048 CXX test/cpp_headers/crc32.o 00:03:29.048 CC test/bdev/bdevio/bdevio.o 00:03:29.048 CC app/spdk_nvme_discover/discovery_aer.o 00:03:29.048 CC app/spdk_top/spdk_top.o 00:03:29.048 LINK sgl 00:03:29.307 CXX test/cpp_headers/crc64.o 00:03:29.307 LINK iscsi_fuzz 00:03:29.307 CC test/env/pci/pci_ut.o 00:03:29.307 LINK spdk_nvme_discover 00:03:29.307 CXX test/cpp_headers/dif.o 00:03:29.307 CC examples/sock/hello_world/hello_sock.o 00:03:29.307 CC test/nvme/e2edp/nvme_dp.o 00:03:29.307 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:29.307 CXX test/cpp_headers/dma.o 00:03:29.566 CC examples/accel/perf/accel_perf.o 00:03:29.566 LINK bdevio 00:03:29.566 CXX test/cpp_headers/endian.o 00:03:29.566 CC examples/blob/hello_world/hello_blob.o 00:03:29.566 LINK hello_sock 00:03:29.566 LINK nvme_dp 00:03:29.566 LINK pci_ut 00:03:29.566 LINK hello_fsdev 00:03:29.566 CXX test/cpp_headers/env_dpdk.o 00:03:29.566 CXX test/cpp_headers/env.o 00:03:29.824 LINK hello_blob 00:03:29.824 CC test/nvme/overhead/overhead.o 00:03:29.825 CXX test/cpp_headers/event.o 00:03:29.825 CC test/nvme/err_injection/err_injection.o 00:03:29.825 CC test/nvme/startup/startup.o 00:03:29.825 CC test/nvme/reserve/reserve.o 00:03:29.825 LINK accel_perf 00:03:29.825 CC examples/nvme/hello_world/hello_world.o 00:03:29.825 LINK spdk_top 00:03:29.825 CXX test/cpp_headers/fd_group.o 00:03:30.085 CC examples/blob/cli/blobcli.o 00:03:30.085 LINK reserve 00:03:30.085 LINK startup 00:03:30.085 LINK err_injection 00:03:30.085 CXX test/cpp_headers/fd.o 00:03:30.085 LINK overhead 00:03:30.085 CC app/vhost/vhost.o 00:03:30.085 LINK hello_world 00:03:30.085 CC app/spdk_dd/spdk_dd.o 00:03:30.085 CXX test/cpp_headers/file.o 00:03:30.085 CC test/nvme/simple_copy/simple_copy.o 00:03:30.085 CXX test/cpp_headers/fsdev.o 00:03:30.345 LINK vhost 00:03:30.345 CC test/nvme/connect_stress/connect_stress.o 00:03:30.345 CC app/fio/nvme/fio_plugin.o 00:03:30.345 CC examples/nvme/reconnect/reconnect.o 00:03:30.345 CC test/nvme/boot_partition/boot_partition.o 00:03:30.345 CXX test/cpp_headers/fsdev_module.o 00:03:30.345 CXX test/cpp_headers/ftl.o 00:03:30.345 LINK spdk_dd 00:03:30.345 LINK simple_copy 00:03:30.345 LINK blobcli 00:03:30.345 LINK connect_stress 00:03:30.345 LINK boot_partition 00:03:30.602 CXX test/cpp_headers/fuse_dispatcher.o 00:03:30.602 CXX test/cpp_headers/gpt_spec.o 00:03:30.602 CC app/fio/bdev/fio_plugin.o 00:03:30.602 CC test/nvme/compliance/nvme_compliance.o 00:03:30.602 CXX test/cpp_headers/hexlify.o 00:03:30.602 CC test/nvme/fused_ordering/fused_ordering.o 00:03:30.602 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:30.602 LINK reconnect 00:03:30.602 CC test/nvme/fdp/fdp.o 00:03:30.602 CC test/nvme/cuse/cuse.o 00:03:30.860 CXX test/cpp_headers/histogram_data.o 00:03:30.860 LINK fused_ordering 00:03:30.860 LINK spdk_nvme 00:03:30.860 LINK doorbell_aers 00:03:30.860 CC examples/nvme/nvme_manage/nvme_manage.o 
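
[Annotation] The CXX test/cpp_headers/*.o stream running through this stage compiles each public SPDK header as its own C++ translation unit, so a header that forgets an include or uses a C++-unsafe construct fails on its own rather than deep inside some consumer. A minimal sketch of that style of check, assuming a plain g++ toolchain and an include/spdk layout (illustrative only, not SPDK's actual build rule):

    for hdr in include/spdk/*.h; do
        name=$(basename "$hdr" .h)
        # One-line translation unit that includes nothing but the header under test.
        printf '#include <spdk/%s.h>\n' "$name" > "check_$name.cpp"
        g++ -Iinclude -std=c++11 -c "check_$name.cpp" -o "check_$name.o" ||
            echo "spdk/$name.h is not self-contained" >&2
    done
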
00:03:30.860 CXX test/cpp_headers/idxd.o 00:03:30.860 LINK nvme_compliance 00:03:30.860 LINK fdp 00:03:30.860 CC examples/nvme/arbitration/arbitration.o 00:03:30.860 CC examples/nvme/hotplug/hotplug.o 00:03:31.118 CXX test/cpp_headers/idxd_spec.o 00:03:31.118 LINK spdk_bdev 00:03:31.118 CXX test/cpp_headers/init.o 00:03:31.118 CC examples/bdev/hello_world/hello_bdev.o 00:03:31.118 CXX test/cpp_headers/ioat.o 00:03:31.118 CC examples/bdev/bdevperf/bdevperf.o 00:03:31.118 CXX test/cpp_headers/ioat_spec.o 00:03:31.118 CXX test/cpp_headers/iscsi_spec.o 00:03:31.118 LINK hotplug 00:03:31.419 LINK nvme_manage 00:03:31.419 LINK arbitration 00:03:31.419 CXX test/cpp_headers/json.o 00:03:31.419 CXX test/cpp_headers/jsonrpc.o 00:03:31.419 LINK hello_bdev 00:03:31.419 CXX test/cpp_headers/keyring.o 00:03:31.419 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:31.419 CXX test/cpp_headers/keyring_module.o 00:03:31.419 CXX test/cpp_headers/likely.o 00:03:31.419 CC examples/nvme/abort/abort.o 00:03:31.419 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:31.419 CXX test/cpp_headers/log.o 00:03:31.419 CXX test/cpp_headers/lvol.o 00:03:31.723 CXX test/cpp_headers/md5.o 00:03:31.723 LINK cmb_copy 00:03:31.723 CXX test/cpp_headers/memory.o 00:03:31.723 CXX test/cpp_headers/mmio.o 00:03:31.723 CXX test/cpp_headers/nbd.o 00:03:31.723 LINK pmr_persistence 00:03:31.723 CXX test/cpp_headers/net.o 00:03:31.723 CXX test/cpp_headers/notify.o 00:03:31.723 CXX test/cpp_headers/nvme.o 00:03:31.723 CXX test/cpp_headers/nvme_intel.o 00:03:31.723 CXX test/cpp_headers/nvme_ocssd.o 00:03:31.723 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:31.723 CXX test/cpp_headers/nvme_spec.o 00:03:31.723 LINK abort 00:03:31.723 LINK bdevperf 00:03:31.723 CXX test/cpp_headers/nvme_zns.o 00:03:31.723 CXX test/cpp_headers/nvmf_cmd.o 00:03:31.980 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:31.980 CXX test/cpp_headers/nvmf.o 00:03:31.980 CXX test/cpp_headers/nvmf_spec.o 00:03:31.980 CXX test/cpp_headers/nvmf_transport.o 00:03:31.980 CXX test/cpp_headers/opal.o 00:03:31.980 CXX test/cpp_headers/opal_spec.o 00:03:31.980 LINK cuse 00:03:31.980 CXX test/cpp_headers/pci_ids.o 00:03:31.980 CXX test/cpp_headers/pipe.o 00:03:31.980 CXX test/cpp_headers/queue.o 00:03:31.980 CXX test/cpp_headers/reduce.o 00:03:31.981 CXX test/cpp_headers/rpc.o 00:03:31.981 CXX test/cpp_headers/scheduler.o 00:03:31.981 CXX test/cpp_headers/scsi.o 00:03:31.981 CXX test/cpp_headers/scsi_spec.o 00:03:31.981 CXX test/cpp_headers/sock.o 00:03:31.981 CC examples/nvmf/nvmf/nvmf.o 00:03:31.981 CXX test/cpp_headers/stdinc.o 00:03:32.238 CXX test/cpp_headers/string.o 00:03:32.238 CXX test/cpp_headers/thread.o 00:03:32.238 CXX test/cpp_headers/trace.o 00:03:32.238 CXX test/cpp_headers/trace_parser.o 00:03:32.238 CXX test/cpp_headers/tree.o 00:03:32.238 CXX test/cpp_headers/ublk.o 00:03:32.238 CXX test/cpp_headers/util.o 00:03:32.238 CXX test/cpp_headers/uuid.o 00:03:32.238 CXX test/cpp_headers/version.o 00:03:32.238 CXX test/cpp_headers/vfio_user_pci.o 00:03:32.238 CXX test/cpp_headers/vfio_user_spec.o 00:03:32.238 CXX test/cpp_headers/vhost.o 00:03:32.238 CXX test/cpp_headers/vmd.o 00:03:32.238 CXX test/cpp_headers/xor.o 00:03:32.238 CXX test/cpp_headers/zipf.o 00:03:32.238 LINK nvmf 00:03:32.803 LINK esnap 00:03:33.078 00:03:33.078 real 1m5.667s 00:03:33.078 user 6m11.515s 00:03:33.078 sys 1m5.155s 00:03:33.078 08:57:11 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:33.078 ************************************ 00:03:33.078 END TEST make 00:03:33.078 
************************************ 00:03:33.078 08:57:11 make -- common/autotest_common.sh@10 -- $ set +x 00:03:33.078 08:57:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:33.078 08:57:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:33.078 08:57:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:33.078 08:57:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.078 08:57:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:33.078 08:57:11 -- pm/common@44 -- $ pid=5065 00:03:33.078 08:57:11 -- pm/common@50 -- $ kill -TERM 5065 00:03:33.078 08:57:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.078 08:57:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:33.078 08:57:11 -- pm/common@44 -- $ pid=5066 00:03:33.078 08:57:11 -- pm/common@50 -- $ kill -TERM 5066 00:03:33.079 08:57:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:33.079 08:57:11 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:33.079 08:57:11 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:33.079 08:57:11 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:33.079 08:57:11 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:33.079 08:57:11 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:33.079 08:57:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:33.079 08:57:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:33.079 08:57:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:33.079 08:57:11 -- scripts/common.sh@336 -- # IFS=.-: 00:03:33.079 08:57:11 -- scripts/common.sh@336 -- # read -ra ver1 00:03:33.079 08:57:11 -- scripts/common.sh@337 -- # IFS=.-: 00:03:33.079 08:57:11 -- scripts/common.sh@337 -- # read -ra ver2 00:03:33.079 08:57:11 -- scripts/common.sh@338 -- # local 'op=<' 00:03:33.079 08:57:11 -- scripts/common.sh@340 -- # ver1_l=2 00:03:33.079 08:57:11 -- scripts/common.sh@341 -- # ver2_l=1 00:03:33.079 08:57:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:33.079 08:57:11 -- scripts/common.sh@344 -- # case "$op" in 00:03:33.079 08:57:11 -- scripts/common.sh@345 -- # : 1 00:03:33.079 08:57:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:33.079 08:57:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:33.079 08:57:11 -- scripts/common.sh@365 -- # decimal 1 00:03:33.079 08:57:11 -- scripts/common.sh@353 -- # local d=1 00:03:33.079 08:57:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:33.079 08:57:11 -- scripts/common.sh@355 -- # echo 1 00:03:33.079 08:57:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:33.079 08:57:11 -- scripts/common.sh@366 -- # decimal 2 00:03:33.079 08:57:11 -- scripts/common.sh@353 -- # local d=2 00:03:33.079 08:57:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:33.079 08:57:11 -- scripts/common.sh@355 -- # echo 2 00:03:33.079 08:57:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:33.079 08:57:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:33.079 08:57:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:33.079 08:57:11 -- scripts/common.sh@368 -- # return 0 00:03:33.079 08:57:11 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:33.079 08:57:11 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:33.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.079 --rc genhtml_branch_coverage=1 00:03:33.079 --rc genhtml_function_coverage=1 00:03:33.079 --rc genhtml_legend=1 00:03:33.079 --rc geninfo_all_blocks=1 00:03:33.079 --rc geninfo_unexecuted_blocks=1 00:03:33.079 00:03:33.079 ' 00:03:33.079 08:57:11 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:33.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.079 --rc genhtml_branch_coverage=1 00:03:33.079 --rc genhtml_function_coverage=1 00:03:33.079 --rc genhtml_legend=1 00:03:33.079 --rc geninfo_all_blocks=1 00:03:33.079 --rc geninfo_unexecuted_blocks=1 00:03:33.079 00:03:33.079 ' 00:03:33.079 08:57:11 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:33.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.079 --rc genhtml_branch_coverage=1 00:03:33.079 --rc genhtml_function_coverage=1 00:03:33.079 --rc genhtml_legend=1 00:03:33.079 --rc geninfo_all_blocks=1 00:03:33.079 --rc geninfo_unexecuted_blocks=1 00:03:33.079 00:03:33.079 ' 00:03:33.079 08:57:11 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:33.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:33.079 --rc genhtml_branch_coverage=1 00:03:33.079 --rc genhtml_function_coverage=1 00:03:33.079 --rc genhtml_legend=1 00:03:33.079 --rc geninfo_all_blocks=1 00:03:33.079 --rc geninfo_unexecuted_blocks=1 00:03:33.079 00:03:33.079 ' 00:03:33.079 08:57:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:33.079 08:57:11 -- nvmf/common.sh@7 -- # uname -s 00:03:33.079 08:57:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:33.079 08:57:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:33.079 08:57:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:33.079 08:57:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:33.079 08:57:11 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:33.079 08:57:11 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:33.079 08:57:11 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:33.079 08:57:11 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:33.079 08:57:11 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fd767b7-dbef-47a7-9446-009fc2cf8346 00:03:33.079 08:57:11 -- nvmf/common.sh@16 -- # NVME_HOSTID=3fd767b7-dbef-47a7-9446-009fc2cf8346 00:03:33.079 08:57:11 -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:33.079 08:57:11 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:33.079 08:57:11 -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:03:33.079 08:57:11 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:33.079 08:57:11 -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:33.079 08:57:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:33.079 08:57:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:33.079 08:57:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:33.079 08:57:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:33.079 08:57:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.079 08:57:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.079 08:57:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.079 08:57:11 -- paths/export.sh@5 -- # export PATH 00:03:33.079 08:57:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:33.079 08:57:11 -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:03:33.079 08:57:11 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:33.079 08:57:11 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:33.079 08:57:11 -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:03:33.079 08:57:11 -- nvmf/common.sh@50 -- # : 0 00:03:33.079 08:57:11 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:33.079 08:57:11 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:33.079 08:57:11 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:33.079 08:57:11 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:33.079 08:57:11 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:33.079 08:57:11 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:33.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:33.079 08:57:11 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:33.079 08:57:11 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:33.079 08:57:11 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:33.079 08:57:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:33.079 08:57:11 -- spdk/autotest.sh@32 -- # uname -s 00:03:33.079 08:57:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:33.079 08:57:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:33.079 08:57:11 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.079 08:57:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:33.079 08:57:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:33.079 08:57:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:33.337 08:57:12 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:33.337 08:57:12 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:33.337 08:57:12 -- spdk/autotest.sh@48 -- # udevadm_pid=54251 00:03:33.337 08:57:12 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:33.337 08:57:12 -- pm/common@17 -- # local monitor 00:03:33.337 08:57:12 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:33.337 08:57:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.337 08:57:12 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:33.337 08:57:12 -- pm/common@25 -- # sleep 1 00:03:33.337 08:57:12 -- pm/common@21 -- # date +%s 00:03:33.337 08:57:12 -- pm/common@21 -- # date +%s 00:03:33.337 08:57:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732093032 00:03:33.337 08:57:12 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732093032 00:03:33.337 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732093032_collect-cpu-load.pm.log 00:03:33.337 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732093032_collect-vmstat.pm.log 00:03:34.269 08:57:13 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:34.269 08:57:13 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:34.269 08:57:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.269 08:57:13 -- common/autotest_common.sh@10 -- # set +x 00:03:34.269 08:57:13 -- spdk/autotest.sh@59 -- # create_test_list 00:03:34.269 08:57:13 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:34.269 08:57:13 -- common/autotest_common.sh@10 -- # set +x 00:03:34.269 08:57:13 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:34.269 08:57:13 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:34.269 08:57:13 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:34.269 08:57:13 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:34.269 08:57:13 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:34.269 08:57:13 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:34.269 08:57:13 -- common/autotest_common.sh@1457 -- # uname 00:03:34.269 08:57:13 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:34.269 08:57:13 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:34.269 08:57:13 -- common/autotest_common.sh@1477 -- # uname 00:03:34.269 08:57:13 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:34.269 08:57:13 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:34.269 08:57:13 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:34.269 lcov: LCOV version 1.15 00:03:34.269 08:57:13 -- 
spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:49.144 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:49.144 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:04.015 08:57:40 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:04.015 08:57:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.015 08:57:40 -- common/autotest_common.sh@10 -- # set +x 00:04:04.015 08:57:40 -- spdk/autotest.sh@78 -- # rm -f 00:04:04.015 08:57:40 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.015 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.015 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:04.015 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:04.015 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:04.015 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:04.015 08:57:41 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:04.015 08:57:41 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:04.015 08:57:41 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:04.015 08:57:41 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 
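
[Annotation] The is_block_zoned calls traced around this point read each namespace's /sys/block/<dev>/queue/zoned attribute; a device counts as zoned whenever the attribute reports anything other than "none", and zoned namespaces are then skipped by the dd wipe loop that follows (the per-device [[ -z '' ]] checks). A condensed bash sketch of that check — the function name and output format here are assumptions, not verbatim from common/autotest_common.sh:

    get_zoned_devs() {
        local dev zoned
        for dev in /sys/block/nvme*; do
            # Kernels without zoned-block support may not expose the attribute.
            [[ -e $dev/queue/zoned ]] || continue
            zoned=$(<"$dev/queue/zoned")
            # "none" marks a conventional device; anything else is zoned.
            [[ $zoned != none ]] && echo "${dev##*/}=$zoned"
        done
        return 0
    }
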
00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.015 08:57:41 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:04.015 08:57:41 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:04.015 08:57:41 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.015 08:57:41 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:04.015 08:57:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.015 08:57:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.015 08:57:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:04.015 08:57:41 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:04.015 08:57:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:04.015 No valid GPT data, bailing 00:04:04.015 08:57:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.015 08:57:41 -- scripts/common.sh@394 -- # pt= 00:04:04.015 08:57:41 -- scripts/common.sh@395 -- # return 1 00:04:04.015 08:57:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:04.016 1+0 records in 00:04:04.016 1+0 records out 00:04:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0110901 s, 94.6 MB/s 00:04:04.016 08:57:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.016 08:57:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.016 08:57:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:04.016 08:57:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:04.016 08:57:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:04.016 No valid GPT data, bailing 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # pt= 00:04:04.016 08:57:41 -- scripts/common.sh@395 -- # return 1 00:04:04.016 08:57:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:04.016 1+0 records in 00:04:04.016 1+0 records out 00:04:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043703 s, 240 MB/s 00:04:04.016 08:57:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.016 08:57:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.016 08:57:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:04.016 08:57:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:04.016 08:57:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:04.016 No valid GPT data, bailing 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:04.016 08:57:41 -- 
scripts/common.sh@394 -- # pt= 00:04:04.016 08:57:41 -- scripts/common.sh@395 -- # return 1 00:04:04.016 08:57:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:04.016 1+0 records in 00:04:04.016 1+0 records out 00:04:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00320885 s, 327 MB/s 00:04:04.016 08:57:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.016 08:57:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.016 08:57:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:04.016 08:57:41 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:04.016 08:57:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:04.016 No valid GPT data, bailing 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # pt= 00:04:04.016 08:57:41 -- scripts/common.sh@395 -- # return 1 00:04:04.016 08:57:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:04.016 1+0 records in 00:04:04.016 1+0 records out 00:04:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458009 s, 229 MB/s 00:04:04.016 08:57:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.016 08:57:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.016 08:57:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:04.016 08:57:41 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:04.016 08:57:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:04.016 No valid GPT data, bailing 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # pt= 00:04:04.016 08:57:41 -- scripts/common.sh@395 -- # return 1 00:04:04.016 08:57:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:04.016 1+0 records in 00:04:04.016 1+0 records out 00:04:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043439 s, 241 MB/s 00:04:04.016 08:57:41 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.016 08:57:41 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.016 08:57:41 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:04.016 08:57:41 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:04.016 08:57:41 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:04.016 No valid GPT data, bailing 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:04.016 08:57:41 -- scripts/common.sh@394 -- # pt= 00:04:04.016 08:57:41 -- scripts/common.sh@395 -- # return 1 00:04:04.016 08:57:41 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:04.016 1+0 records in 00:04:04.016 1+0 records out 00:04:04.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00446419 s, 235 MB/s 00:04:04.016 08:57:41 -- spdk/autotest.sh@105 -- # sync 00:04:04.016 08:57:42 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:04.016 08:57:42 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:04.016 08:57:42 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:04.951 08:57:43 -- spdk/autotest.sh@111 -- # uname -s 00:04:04.951 08:57:43 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:04.951 08:57:43 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:04.951 08:57:43 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:05.525 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:05.801 Hugepages
00:04:05.801 node   hugesize    free /  total
00:04:05.801 node0  1048576kB      0 /      0
00:04:05.801 node0     2048kB      0 /      0
00:04:05.801 
00:04:05.801 Type    BDF           Vendor  Device  NUMA     Driver      Device  Block devices
00:04:05.801 virtio  0000:00:03.0  1af4    1001    unknown  virtio-pci  -       vda
00:04:06.102 NVMe    0000:00:10.0  1b36    0010    unknown  nvme        nvme0   nvme0n1
00:04:06.102 NVMe    0000:00:11.0  1b36    0010    unknown  nvme        nvme2   nvme2n1
00:04:06.102 NVMe    0000:00:12.0  1b36    0010    unknown  nvme        nvme1   nvme1n1 nvme1n2 nvme1n3
00:04:06.102 NVMe    0000:00:13.0  1b36    0010    unknown  nvme        nvme3   nvme3n1
00:04:06.102 08:57:44 -- spdk/autotest.sh@117 -- # uname -s
00:04:06.102 08:57:44 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:04:06.102 08:57:44 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:04:06.102 08:57:44 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.670 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:07.239 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:04:07.239 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:04:07.239 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:04:07.239 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:04:07.239 08:57:45 -- common/autotest_common.sh@1517 -- # sleep 1
00:04:08.174 08:57:46 -- common/autotest_common.sh@1518 -- # bdfs=()
00:04:08.174 08:57:46 -- common/autotest_common.sh@1518 -- # local bdfs
00:04:08.174 08:57:46 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:04:08.174 08:57:46 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:04:08.174 08:57:46 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:08.174 08:57:46 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:08.174 08:57:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:08.174 08:57:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:04:08.174 08:57:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:08.174 08:57:47 -- common/autotest_common.sh@1500 -- # (( 4 == 0 ))
00:04:08.174 08:57:47 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:04:08.174 08:57:47 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:08.432 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:08.690 Waiting for block devices as requested
00:04:08.690 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:04:08.690 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:04:08.690 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:04:08.947 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:04:14.225 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:04:14.225 08:57:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:14.225 08:57:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:14.225 08:57:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3
00:04:14.225 08:57:52 -- common/autotest_common.sh@1487
-- # grep 0000:00:10.0/nvme/nvme 00:04:14.225 08:57:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:14.225 08:57:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:14.225 08:57:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:14.225 08:57:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:14.225 08:57:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:14.225 08:57:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:14.225 08:57:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:14.225 08:57:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.225 08:57:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.225 08:57:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.225 08:57:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.225 08:57:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.225 08:57:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.225 08:57:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:14.225 08:57:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.225 08:57:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.225 08:57:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.225 08:57:52 -- common/autotest_common.sh@1543 -- # continue 00:04:14.225 08:57:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:14.225 08:57:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:14.225 08:57:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:14.225 08:57:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:14.225 08:57:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.225 08:57:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.226 08:57:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.226 08:57:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.226 08:57:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1543 -- # continue 00:04:14.226 08:57:52 -- common/autotest_common.sh@1524 -- # 
for bdf in "${bdfs[@]}" 00:04:14.226 08:57:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:14.226 08:57:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.226 08:57:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.226 08:57:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.226 08:57:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1543 -- # continue 00:04:14.226 08:57:52 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:14.226 08:57:52 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:14.226 08:57:52 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:14.226 08:57:52 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:14.226 08:57:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:14.226 08:57:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:14.226 08:57:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:14.226 
08:57:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:14.226 08:57:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:14.226 08:57:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:14.226 08:57:52 -- common/autotest_common.sh@1543 -- # continue 00:04:14.226 08:57:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:14.226 08:57:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:14.226 08:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:14.226 08:57:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:14.226 08:57:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:14.226 08:57:52 -- common/autotest_common.sh@10 -- # set +x 00:04:14.226 08:57:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:14.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:15.057 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.057 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.057 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.318 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:15.318 08:57:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:15.318 08:57:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:15.318 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.318 08:57:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:15.318 08:57:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:15.318 08:57:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:15.318 08:57:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:15.318 08:57:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:15.318 08:57:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:15.318 08:57:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:15.318 08:57:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:15.318 08:57:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:15.318 08:57:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:15.318 08:57:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:15.318 08:57:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:15.318 08:57:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:15.318 08:57:54 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:15.318 08:57:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:15.318 08:57:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.318 08:57:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.318 08:57:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.318 08:57:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.318 08:57:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # 
cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.318 08:57:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.318 08:57:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:15.318 08:57:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:15.318 08:57:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:15.318 08:57:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:15.318 08:57:54 -- common/autotest_common.sh@1572 -- # return 0 00:04:15.318 08:57:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:15.318 08:57:54 -- common/autotest_common.sh@1580 -- # return 0 00:04:15.318 08:57:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:15.318 08:57:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:15.318 08:57:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.318 08:57:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:15.318 08:57:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:15.318 08:57:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.318 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.318 08:57:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:15.318 08:57:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.318 08:57:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.318 08:57:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.318 08:57:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.318 ************************************ 00:04:15.318 START TEST env 00:04:15.318 ************************************ 00:04:15.318 08:57:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:15.577 * Looking for test storage... 00:04:15.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.577 08:57:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.577 08:57:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.577 08:57:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.577 08:57:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.577 08:57:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.577 08:57:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.577 08:57:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.577 08:57:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.577 08:57:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.577 08:57:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.577 08:57:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.577 08:57:54 env -- scripts/common.sh@344 -- # case "$op" in 00:04:15.577 08:57:54 env -- scripts/common.sh@345 -- # : 1 00:04:15.577 08:57:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.577 08:57:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.577 08:57:54 env -- scripts/common.sh@365 -- # decimal 1 00:04:15.577 08:57:54 env -- scripts/common.sh@353 -- # local d=1 00:04:15.577 08:57:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.577 08:57:54 env -- scripts/common.sh@355 -- # echo 1 00:04:15.577 08:57:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.577 08:57:54 env -- scripts/common.sh@366 -- # decimal 2 00:04:15.577 08:57:54 env -- scripts/common.sh@353 -- # local d=2 00:04:15.577 08:57:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.577 08:57:54 env -- scripts/common.sh@355 -- # echo 2 00:04:15.577 08:57:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.577 08:57:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.577 08:57:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.577 08:57:54 env -- scripts/common.sh@368 -- # return 0 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.577 --rc genhtml_branch_coverage=1 00:04:15.577 --rc genhtml_function_coverage=1 00:04:15.577 --rc genhtml_legend=1 00:04:15.577 --rc geninfo_all_blocks=1 00:04:15.577 --rc geninfo_unexecuted_blocks=1 00:04:15.577 00:04:15.577 ' 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.577 --rc genhtml_branch_coverage=1 00:04:15.577 --rc genhtml_function_coverage=1 00:04:15.577 --rc genhtml_legend=1 00:04:15.577 --rc geninfo_all_blocks=1 00:04:15.577 --rc geninfo_unexecuted_blocks=1 00:04:15.577 00:04:15.577 ' 00:04:15.577 08:57:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.577 --rc genhtml_branch_coverage=1 00:04:15.577 --rc genhtml_function_coverage=1 00:04:15.577 --rc genhtml_legend=1 00:04:15.577 --rc geninfo_all_blocks=1 00:04:15.577 --rc geninfo_unexecuted_blocks=1 00:04:15.577 00:04:15.578 ' 00:04:15.578 08:57:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.578 --rc genhtml_branch_coverage=1 00:04:15.578 --rc genhtml_function_coverage=1 00:04:15.578 --rc genhtml_legend=1 00:04:15.578 --rc geninfo_all_blocks=1 00:04:15.578 --rc geninfo_unexecuted_blocks=1 00:04:15.578 00:04:15.578 ' 00:04:15.578 08:57:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.578 08:57:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.578 08:57:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.578 08:57:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.578 ************************************ 00:04:15.578 START TEST env_memory 00:04:15.578 ************************************ 00:04:15.578 08:57:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:15.578 00:04:15.578 00:04:15.578 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.578 http://cunit.sourceforge.net/ 00:04:15.578 00:04:15.578 00:04:15.578 Suite: memory 00:04:15.578 Test: alloc and free memory map ...[2024-11-20 08:57:54.434470] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:15.578 passed 00:04:15.578 Test: mem map translation ...[2024-11-20 08:57:54.473346] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:15.578 [2024-11-20 08:57:54.473401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:15.578 [2024-11-20 08:57:54.473463] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:15.578 [2024-11-20 08:57:54.473480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:15.836 passed 00:04:15.836 Test: mem map registration ...[2024-11-20 08:57:54.541801] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:15.836 [2024-11-20 08:57:54.541840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:15.836 passed 00:04:15.836 Test: mem map adjacent registrations ...passed 00:04:15.836 00:04:15.836 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.836 suites 1 1 n/a 0 0 00:04:15.836 tests 4 4 4 0 0 00:04:15.836 asserts 152 152 152 0 n/a 00:04:15.836 00:04:15.836 Elapsed time = 0.234 seconds 00:04:15.836 ************************************ 00:04:15.836 END TEST env_memory 00:04:15.836 ************************************ 00:04:15.836 00:04:15.836 real 0m0.271s 00:04:15.836 user 0m0.240s 00:04:15.836 sys 0m0.023s 00:04:15.836 08:57:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.836 08:57:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:15.836 08:57:54 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.836 08:57:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.836 08:57:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.836 08:57:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.836 ************************************ 00:04:15.836 START TEST env_vtophys 00:04:15.836 ************************************ 00:04:15.836 08:57:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:15.836 EAL: lib.eal log level changed from notice to debug 00:04:15.836 EAL: Detected lcore 0 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 1 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 2 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 3 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 4 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 5 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 6 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 7 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 8 as core 0 on socket 0 00:04:15.836 EAL: Detected lcore 9 as core 0 on socket 0 00:04:15.836 EAL: Maximum logical cores by configuration: 128 00:04:15.836 EAL: Detected CPU lcores: 10 00:04:15.836 EAL: Detected NUMA nodes: 1 00:04:15.836 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:15.836 EAL: Detected shared linkage of DPDK 00:04:15.836 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:15.836 EAL: Selected IOVA mode 'PA' 00:04:15.836 EAL: Probing VFIO support... 00:04:15.836 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:15.836 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:15.836 EAL: Ask a virtual area of 0x2e000 bytes 00:04:15.836 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:15.836 EAL: Setting up physically contiguous memory... 00:04:15.836 EAL: Setting maximum number of open files to 524288 00:04:15.836 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:15.836 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:15.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.836 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:15.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.836 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:15.836 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:15.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.836 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:15.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.836 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:15.836 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:15.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.836 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:15.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.836 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:15.836 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:15.836 EAL: Ask a virtual area of 0x61000 bytes 00:04:15.836 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:15.836 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:15.836 EAL: Ask a virtual area of 0x400000000 bytes 00:04:15.836 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:15.836 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:15.836 EAL: Hugepages will be freed exactly as allocated. 00:04:15.836 EAL: No shared files mode enabled, IPC is disabled 00:04:15.836 EAL: No shared files mode enabled, IPC is disabled 00:04:16.097 EAL: TSC frequency is ~2600000 KHz 00:04:16.097 EAL: Main lcore 0 is ready (tid=7fefe340da40;cpuset=[0]) 00:04:16.097 EAL: Trying to obtain current memory policy. 00:04:16.097 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.097 EAL: Restoring previous memory policy: 0 00:04:16.097 EAL: request: mp_malloc_sync 00:04:16.097 EAL: No shared files mode enabled, IPC is disabled 00:04:16.097 EAL: Heap on socket 0 was expanded by 2MB 00:04:16.097 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:16.097 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:16.097 EAL: Mem event callback 'spdk:(nil)' registered 00:04:16.097 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:16.097 00:04:16.097 00:04:16.097 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.097 http://cunit.sourceforge.net/ 00:04:16.097 00:04:16.097 00:04:16.097 Suite: components_suite 00:04:16.356 Test: vtophys_malloc_test ...passed 00:04:16.356 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:16.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.356 EAL: Restoring previous memory policy: 4 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was expanded by 4MB 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was shrunk by 4MB 00:04:16.356 EAL: Trying to obtain current memory policy. 00:04:16.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.356 EAL: Restoring previous memory policy: 4 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was expanded by 6MB 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was shrunk by 6MB 00:04:16.356 EAL: Trying to obtain current memory policy. 00:04:16.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.356 EAL: Restoring previous memory policy: 4 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was expanded by 10MB 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was shrunk by 10MB 00:04:16.356 EAL: Trying to obtain current memory policy. 00:04:16.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.356 EAL: Restoring previous memory policy: 4 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was expanded by 18MB 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was shrunk by 18MB 00:04:16.356 EAL: Trying to obtain current memory policy. 00:04:16.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.356 EAL: Restoring previous memory policy: 4 00:04:16.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.356 EAL: request: mp_malloc_sync 00:04:16.356 EAL: No shared files mode enabled, IPC is disabled 00:04:16.356 EAL: Heap on socket 0 was expanded by 34MB 00:04:16.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.658 EAL: request: mp_malloc_sync 00:04:16.658 EAL: No shared files mode enabled, IPC is disabled 00:04:16.658 EAL: Heap on socket 0 was shrunk by 34MB 00:04:16.658 EAL: Trying to obtain current memory policy. 
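Each expand/shrink pair in this suite is one allocation round of vtophys_malloc_test: the test takes a buffer of roughly the stated size from the DPDK heap, checks that its virtual-to-physical translation resolves, and frees it again; the rounds continue below in the same pattern up to 1026MB. A minimal sketch of one round against the public SPDK env API (illustrative only, not the test's actual source):

    #include <assert.h>
    #include "spdk/env.h"

    static void
    one_round(size_t size)
    {
        /* Allocation from the DPDK heap; may trigger "Heap ... was expanded". */
        void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MiB align */, NULL);
        assert(buf != NULL);

        /* The point of the test: the newly grown region must be translatable. */
        uint64_t len = size;
        assert(spdk_vtophys(buf, &len) != SPDK_VTOPHYS_ERROR);

        /* Freeing may trigger "Heap ... was shrunk" via the same callback. */
        spdk_dma_free(buf);
    }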
00:04:16.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.658 EAL: Restoring previous memory policy: 4 00:04:16.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.658 EAL: request: mp_malloc_sync 00:04:16.658 EAL: No shared files mode enabled, IPC is disabled 00:04:16.658 EAL: Heap on socket 0 was expanded by 66MB 00:04:16.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.658 EAL: request: mp_malloc_sync 00:04:16.658 EAL: No shared files mode enabled, IPC is disabled 00:04:16.658 EAL: Heap on socket 0 was shrunk by 66MB 00:04:16.658 EAL: Trying to obtain current memory policy. 00:04:16.658 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.658 EAL: Restoring previous memory policy: 4 00:04:16.658 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.658 EAL: request: mp_malloc_sync 00:04:16.658 EAL: No shared files mode enabled, IPC is disabled 00:04:16.658 EAL: Heap on socket 0 was expanded by 130MB 00:04:16.916 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.916 EAL: request: mp_malloc_sync 00:04:16.916 EAL: No shared files mode enabled, IPC is disabled 00:04:16.916 EAL: Heap on socket 0 was shrunk by 130MB 00:04:16.916 EAL: Trying to obtain current memory policy. 00:04:16.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:16.916 EAL: Restoring previous memory policy: 4 00:04:16.916 EAL: Calling mem event callback 'spdk:(nil)' 00:04:16.916 EAL: request: mp_malloc_sync 00:04:16.916 EAL: No shared files mode enabled, IPC is disabled 00:04:16.916 EAL: Heap on socket 0 was expanded by 258MB 00:04:17.176 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.438 EAL: request: mp_malloc_sync 00:04:17.438 EAL: No shared files mode enabled, IPC is disabled 00:04:17.438 EAL: Heap on socket 0 was shrunk by 258MB 00:04:17.699 EAL: Trying to obtain current memory policy. 00:04:17.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:17.699 EAL: Restoring previous memory policy: 4 00:04:17.699 EAL: Calling mem event callback 'spdk:(nil)' 00:04:17.699 EAL: request: mp_malloc_sync 00:04:17.699 EAL: No shared files mode enabled, IPC is disabled 00:04:17.699 EAL: Heap on socket 0 was expanded by 514MB 00:04:18.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.265 EAL: request: mp_malloc_sync 00:04:18.265 EAL: No shared files mode enabled, IPC is disabled 00:04:18.265 EAL: Heap on socket 0 was shrunk by 514MB 00:04:18.522 EAL: Trying to obtain current memory policy. 
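The recurring "Calling mem event callback 'spdk:(nil)'" records are DPDK invoking the hotplug callback that SPDK registers at env init; that callback is what turns each heap expansion or shrink into an spdk_mem_register()/spdk_mem_unregister() call so the translation maps stay current. A hedged sketch of that registration using the public DPDK API (the handler name here is invented; SPDK's real one lives in lib/env_dpdk/memory.c):

    #include <rte_memory.h>
    #include "spdk/env.h"

    /* Hypothetical handler name, for illustration. */
    static void
    heap_event_cb(enum rte_mem_event event, const void *addr, size_t len, void *arg)
    {
        if (event == RTE_MEM_EVENT_ALLOC) {
            spdk_mem_register((void *)addr, len);   /* "Heap ... was expanded" */
        } else if (event == RTE_MEM_EVENT_FREE) {
            spdk_mem_unregister((void *)addr, len); /* "Heap ... was shrunk" */
        }
    }

    static void
    register_heap_callback(void)
    {
        /* The "(nil)" in the log records is this NULL argument echoed back. */
        rte_mem_event_callback_register("spdk", heap_event_cb, NULL);
    }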
00:04:18.522 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:18.778 EAL: Restoring previous memory policy: 4 00:04:18.778 EAL: Calling mem event callback 'spdk:(nil)' 00:04:18.778 EAL: request: mp_malloc_sync 00:04:18.778 EAL: No shared files mode enabled, IPC is disabled 00:04:18.778 EAL: Heap on socket 0 was expanded by 1026MB 00:04:19.711 EAL: Calling mem event callback 'spdk:(nil)' 00:04:19.711 EAL: request: mp_malloc_sync 00:04:19.711 EAL: No shared files mode enabled, IPC is disabled 00:04:19.711 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:20.647 passed 00:04:20.647 00:04:20.647 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.647 suites 1 1 n/a 0 0 00:04:20.647 tests 2 2 2 0 0 00:04:20.647 asserts 5782 5782 5782 0 n/a 00:04:20.647 00:04:20.647 Elapsed time = 4.381 seconds 00:04:20.647 EAL: Calling mem event callback 'spdk:(nil)' 00:04:20.647 EAL: request: mp_malloc_sync 00:04:20.647 EAL: No shared files mode enabled, IPC is disabled 00:04:20.647 EAL: Heap on socket 0 was shrunk by 2MB 00:04:20.647 EAL: No shared files mode enabled, IPC is disabled 00:04:20.647 EAL: No shared files mode enabled, IPC is disabled 00:04:20.647 EAL: No shared files mode enabled, IPC is disabled 00:04:20.647 00:04:20.647 real 0m4.633s 00:04:20.647 user 0m3.864s 00:04:20.647 sys 0m0.629s 00:04:20.647 08:57:59 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.647 ************************************ 00:04:20.647 END TEST env_vtophys 00:04:20.647 ************************************ 00:04:20.647 08:57:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:20.647 08:57:59 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:20.647 08:57:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.647 08:57:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.647 08:57:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.647 ************************************ 00:04:20.647 START TEST env_pci 00:04:20.647 ************************************ 00:04:20.647 08:57:59 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:20.647 00:04:20.647 00:04:20.647 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.647 http://cunit.sourceforge.net/ 00:04:20.647 00:04:20.647 00:04:20.647 Suite: pci 00:04:20.647 Test: pci_hook ...[2024-11-20 08:57:59.405567] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56989 has claimed it 00:04:20.647 passed 00:04:20.647 00:04:20.647 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.647 suites 1 1 n/a 0 0 00:04:20.647 tests 1 1 1 0 0 00:04:20.647 asserts 25 25 25 0 n/a 00:04:20.647 00:04:20.647 Elapsed time = 0.004 seconds 00:04:20.647 EAL: Cannot find device (10000:00:01.0) 00:04:20.647 EAL: Failed to attach device on primary process 00:04:20.647 00:04:20.647 real 0m0.054s 00:04:20.647 user 0m0.028s 00:04:20.647 sys 0m0.025s 00:04:20.647 08:57:59 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.647 ************************************ 00:04:20.647 END TEST env_pci 00:04:20.647 ************************************ 00:04:20.647 08:57:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:20.647 08:57:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:20.647 08:57:59 env -- env/env.sh@15 -- # uname 00:04:20.647 08:57:59 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:20.647 08:57:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:20.647 08:57:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.647 08:57:59 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:20.647 08:57:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.647 08:57:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.647 ************************************ 00:04:20.647 START TEST env_dpdk_post_init 00:04:20.647 ************************************ 00:04:20.647 08:57:59 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:20.647 EAL: Detected CPU lcores: 10 00:04:20.647 EAL: Detected NUMA nodes: 1 00:04:20.647 EAL: Detected shared linkage of DPDK 00:04:20.647 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:20.647 EAL: Selected IOVA mode 'PA' 00:04:20.906 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:20.906 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:20.906 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:20.906 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:20.906 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:20.906 Starting DPDK initialization... 00:04:20.906 Starting SPDK post initialization... 00:04:20.906 SPDK NVMe probe 00:04:20.906 Attaching to 0000:00:10.0 00:04:20.906 Attaching to 0000:00:11.0 00:04:20.906 Attaching to 0000:00:12.0 00:04:20.906 Attaching to 0000:00:13.0 00:04:20.906 Attached to 0000:00:10.0 00:04:20.906 Attached to 0000:00:11.0 00:04:20.906 Attached to 0000:00:13.0 00:04:20.906 Attached to 0000:00:12.0 00:04:20.906 Cleaning up... 
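The four "Attaching to"/"Attached to" pairs above are the spdk_nvme driver claiming the emulated QEMU NVMe controllers (vendor:device 1b36:0010) found during PCI enumeration. Roughly what env_dpdk_post_init drives, sketched against the public probe API (illustrative; the callback bodies are simplified):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static bool
    probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* attach every controller the PCIe enumeration finds */
    }

    static void
    attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr); /* e.g. 0000:00:10.0 */
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* NULL trid = probe the local PCIe bus; each controller hits both callbacks. */
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }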
00:04:20.906 00:04:20.906 real 0m0.238s 00:04:20.906 user 0m0.070s 00:04:20.906 sys 0m0.070s 00:04:20.906 08:57:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.906 ************************************ 00:04:20.906 08:57:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.906 END TEST env_dpdk_post_init 00:04:20.906 ************************************ 00:04:20.906 08:57:59 env -- env/env.sh@26 -- # uname 00:04:20.906 08:57:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:20.906 08:57:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:20.906 08:57:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.906 08:57:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.906 08:57:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.907 ************************************ 00:04:20.907 START TEST env_mem_callbacks 00:04:20.907 ************************************ 00:04:20.907 08:57:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:21.168 EAL: Detected CPU lcores: 10 00:04:21.168 EAL: Detected NUMA nodes: 1 00:04:21.168 EAL: Detected shared linkage of DPDK 00:04:21.168 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:21.168 EAL: Selected IOVA mode 'PA' 00:04:21.168 00:04:21.168 00:04:21.168 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.168 http://cunit.sourceforge.net/ 00:04:21.168 00:04:21.168 00:04:21.168 Suite: memory 00:04:21.168 Test: test ... 00:04:21.168 register 0x200000200000 2097152 00:04:21.168 malloc 3145728 00:04:21.168 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:21.168 register 0x200000400000 4194304 00:04:21.168 buf 0x2000004fffc0 len 3145728 PASSED 00:04:21.168 malloc 64 00:04:21.168 buf 0x2000004ffec0 len 64 PASSED 00:04:21.168 malloc 4194304 00:04:21.168 register 0x200000800000 6291456 00:04:21.168 buf 0x2000009fffc0 len 4194304 PASSED 00:04:21.168 free 0x2000004fffc0 3145728 00:04:21.168 free 0x2000004ffec0 64 00:04:21.168 unregister 0x200000400000 4194304 PASSED 00:04:21.168 free 0x2000009fffc0 4194304 00:04:21.168 unregister 0x200000800000 6291456 PASSED 00:04:21.168 malloc 8388608 00:04:21.168 register 0x200000400000 10485760 00:04:21.168 buf 0x2000005fffc0 len 8388608 PASSED 00:04:21.168 free 0x2000005fffc0 8388608 00:04:21.168 unregister 0x200000400000 10485760 PASSED 00:04:21.168 passed 00:04:21.168 00:04:21.168 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.168 suites 1 1 n/a 0 0 00:04:21.168 tests 1 1 1 0 0 00:04:21.168 asserts 15 15 15 0 n/a 00:04:21.168 00:04:21.168 Elapsed time = 0.039 seconds 00:04:21.168 00:04:21.168 real 0m0.197s 00:04:21.168 user 0m0.056s 00:04:21.168 sys 0m0.040s 00:04:21.168 08:57:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.168 ************************************ 00:04:21.168 END TEST env_mem_callbacks 00:04:21.168 ************************************ 00:04:21.168 08:57:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:21.168 00:04:21.168 real 0m5.827s 00:04:21.168 user 0m4.430s 00:04:21.168 sys 0m0.997s 00:04:21.168 08:58:00 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.168 08:58:00 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.168 ************************************ 00:04:21.168 END TEST env 00:04:21.168 
************************************ 00:04:21.430 08:58:00 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.430 08:58:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.430 08:58:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.430 08:58:00 -- common/autotest_common.sh@10 -- # set +x 00:04:21.430 ************************************ 00:04:21.430 START TEST rpc 00:04:21.430 ************************************ 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:21.430 * Looking for test storage... 00:04:21.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.430 08:58:00 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.430 08:58:00 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.430 08:58:00 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.430 08:58:00 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.430 08:58:00 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.430 08:58:00 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:21.430 08:58:00 rpc -- scripts/common.sh@345 -- # : 1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.430 08:58:00 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:21.430 08:58:00 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@353 -- # local d=1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.430 08:58:00 rpc -- scripts/common.sh@355 -- # echo 1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.430 08:58:00 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@353 -- # local d=2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.430 08:58:00 rpc -- scripts/common.sh@355 -- # echo 2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.430 08:58:00 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.430 08:58:00 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.430 08:58:00 rpc -- scripts/common.sh@368 -- # return 0 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:21.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.430 --rc genhtml_branch_coverage=1 00:04:21.430 --rc genhtml_function_coverage=1 00:04:21.430 --rc genhtml_legend=1 00:04:21.430 --rc geninfo_all_blocks=1 00:04:21.430 --rc geninfo_unexecuted_blocks=1 00:04:21.430 00:04:21.430 ' 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:21.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.430 --rc genhtml_branch_coverage=1 00:04:21.430 --rc genhtml_function_coverage=1 00:04:21.430 --rc genhtml_legend=1 00:04:21.430 --rc geninfo_all_blocks=1 00:04:21.430 --rc geninfo_unexecuted_blocks=1 00:04:21.430 00:04:21.430 ' 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:21.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.430 --rc genhtml_branch_coverage=1 00:04:21.430 --rc genhtml_function_coverage=1 00:04:21.430 --rc genhtml_legend=1 00:04:21.430 --rc geninfo_all_blocks=1 00:04:21.430 --rc geninfo_unexecuted_blocks=1 00:04:21.430 00:04:21.430 ' 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:21.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.430 --rc genhtml_branch_coverage=1 00:04:21.430 --rc genhtml_function_coverage=1 00:04:21.430 --rc genhtml_legend=1 00:04:21.430 --rc geninfo_all_blocks=1 00:04:21.430 --rc geninfo_unexecuted_blocks=1 00:04:21.430 00:04:21.430 ' 00:04:21.430 08:58:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57111 00:04:21.430 08:58:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.430 08:58:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57111 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@835 -- # '[' -z 57111 ']' 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
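waitforlisten polls for the RPC Unix socket that spdk_tgt opens once its app framework is up; everything after this point in the suite is JSON-RPC against that socket. In outline, what the target does before the socket appears, sketched against the public app API (assuming default options):

    #include "spdk/event.h"

    static void
    tgt_started(void *ctx)
    {
        /* Reactor is running and /var/tmp/spdk.sock is accepting RPCs. */
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts;
        int rc;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "spdk_tgt";
        opts.rpc_addr = "/var/tmp/spdk.sock"; /* the socket waitforlisten polls */

        rc = spdk_app_start(&opts, tgt_started, NULL);
        spdk_app_fini();
        return rc;
    }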
00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.430 08:58:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.430 08:58:00 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:21.430 [2024-11-20 08:58:00.327992] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:21.430 [2024-11-20 08:58:00.328119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57111 ] 00:04:21.691 [2024-11-20 08:58:00.487644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.952 [2024-11-20 08:58:00.613670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:21.952 [2024-11-20 08:58:00.613733] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57111' to capture a snapshot of events at runtime. 00:04:21.952 [2024-11-20 08:58:00.613744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:21.952 [2024-11-20 08:58:00.613755] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:21.952 [2024-11-20 08:58:00.613764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57111 for offline analysis/debug. 00:04:21.952 [2024-11-20 08:58:00.614704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.528 08:58:01 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.528 08:58:01 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:22.528 08:58:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.528 08:58:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:22.528 08:58:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:22.528 08:58:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:22.528 08:58:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.528 08:58:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.528 08:58:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.528 ************************************ 00:04:22.528 START TEST rpc_integrity 00:04:22.528 ************************************ 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:22.528 { 00:04:22.528 "name": "Malloc0", 00:04:22.528 "aliases": [ 00:04:22.528 "0dcdeb90-dac9-4830-9053-3c19f8be73e7" 00:04:22.528 ], 00:04:22.528 "product_name": "Malloc disk", 00:04:22.528 "block_size": 512, 00:04:22.528 "num_blocks": 16384, 00:04:22.528 "uuid": "0dcdeb90-dac9-4830-9053-3c19f8be73e7", 00:04:22.528 "assigned_rate_limits": { 00:04:22.528 "rw_ios_per_sec": 0, 00:04:22.528 "rw_mbytes_per_sec": 0, 00:04:22.528 "r_mbytes_per_sec": 0, 00:04:22.528 "w_mbytes_per_sec": 0 00:04:22.528 }, 00:04:22.528 "claimed": false, 00:04:22.528 "zoned": false, 00:04:22.528 "supported_io_types": { 00:04:22.528 "read": true, 00:04:22.528 "write": true, 00:04:22.528 "unmap": true, 00:04:22.528 "flush": true, 00:04:22.528 "reset": true, 00:04:22.528 "nvme_admin": false, 00:04:22.528 "nvme_io": false, 00:04:22.528 "nvme_io_md": false, 00:04:22.528 "write_zeroes": true, 00:04:22.528 "zcopy": true, 00:04:22.528 "get_zone_info": false, 00:04:22.528 "zone_management": false, 00:04:22.528 "zone_append": false, 00:04:22.528 "compare": false, 00:04:22.528 "compare_and_write": false, 00:04:22.528 "abort": true, 00:04:22.528 "seek_hole": false, 00:04:22.528 "seek_data": false, 00:04:22.528 "copy": true, 00:04:22.528 "nvme_iov_md": false 00:04:22.528 }, 00:04:22.528 "memory_domains": [ 00:04:22.528 { 00:04:22.528 "dma_device_id": "system", 00:04:22.528 "dma_device_type": 1 00:04:22.528 }, 00:04:22.528 { 00:04:22.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.528 "dma_device_type": 2 00:04:22.528 } 00:04:22.528 ], 00:04:22.528 "driver_specific": {} 00:04:22.528 } 00:04:22.528 ]' 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.528 [2024-11-20 08:58:01.436903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:22.528 [2024-11-20 08:58:01.436975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:22.528 [2024-11-20 08:58:01.437006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:22.528 [2024-11-20 08:58:01.437019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:22.528 [2024-11-20 08:58:01.439571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:22.528 [2024-11-20 08:58:01.439627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:22.528 
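The NOTICE lines above trace the passthru module's registration path: bdev_passthru_create opens the base bdev (Malloc0), claims it exclusively, then registers Passthru0 on top; the claim is why Malloc0's descriptor below reports "claimed": true with "claim_type": "exclusive_write". A hedged outline of the open-and-claim step (module plumbing omitted; spdk_bdev_module_claim_bdev is the long-standing claim call, though newer trees also offer descriptor-based variants):

    #include "spdk/bdev_module.h"

    static struct spdk_bdev_module passthru_if; /* module descriptor, details omitted */

    static void
    base_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
    {
        /* hot-remove handling would go here */
    }

    static int
    claim_base(const char *base_name, struct spdk_bdev_desc **desc)
    {
        int rc;

        /* "base bdev opened" */
        rc = spdk_bdev_open_ext(base_name, true, base_event_cb, NULL, desc);
        if (rc != 0) {
            return rc;
        }

        /* "bdev claimed" -- makes Malloc0 report claim_type exclusive_write */
        return spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(*desc),
                                           *desc, &passthru_if);
    }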
Passthru0 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.528 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.528 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.790 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.790 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:22.790 { 00:04:22.790 "name": "Malloc0", 00:04:22.790 "aliases": [ 00:04:22.790 "0dcdeb90-dac9-4830-9053-3c19f8be73e7" 00:04:22.790 ], 00:04:22.790 "product_name": "Malloc disk", 00:04:22.790 "block_size": 512, 00:04:22.790 "num_blocks": 16384, 00:04:22.790 "uuid": "0dcdeb90-dac9-4830-9053-3c19f8be73e7", 00:04:22.790 "assigned_rate_limits": { 00:04:22.790 "rw_ios_per_sec": 0, 00:04:22.790 "rw_mbytes_per_sec": 0, 00:04:22.790 "r_mbytes_per_sec": 0, 00:04:22.790 "w_mbytes_per_sec": 0 00:04:22.790 }, 00:04:22.790 "claimed": true, 00:04:22.790 "claim_type": "exclusive_write", 00:04:22.790 "zoned": false, 00:04:22.790 "supported_io_types": { 00:04:22.790 "read": true, 00:04:22.790 "write": true, 00:04:22.790 "unmap": true, 00:04:22.790 "flush": true, 00:04:22.790 "reset": true, 00:04:22.790 "nvme_admin": false, 00:04:22.790 "nvme_io": false, 00:04:22.790 "nvme_io_md": false, 00:04:22.790 "write_zeroes": true, 00:04:22.790 "zcopy": true, 00:04:22.790 "get_zone_info": false, 00:04:22.790 "zone_management": false, 00:04:22.790 "zone_append": false, 00:04:22.790 "compare": false, 00:04:22.790 "compare_and_write": false, 00:04:22.790 "abort": true, 00:04:22.790 "seek_hole": false, 00:04:22.790 "seek_data": false, 00:04:22.790 "copy": true, 00:04:22.790 "nvme_iov_md": false 00:04:22.790 }, 00:04:22.790 "memory_domains": [ 00:04:22.790 { 00:04:22.790 "dma_device_id": "system", 00:04:22.790 "dma_device_type": 1 00:04:22.790 }, 00:04:22.790 { 00:04:22.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.790 "dma_device_type": 2 00:04:22.790 } 00:04:22.790 ], 00:04:22.790 "driver_specific": {} 00:04:22.790 }, 00:04:22.790 { 00:04:22.790 "name": "Passthru0", 00:04:22.790 "aliases": [ 00:04:22.790 "8dff99fe-f048-5caa-88e4-ed359e0fe521" 00:04:22.790 ], 00:04:22.790 "product_name": "passthru", 00:04:22.790 "block_size": 512, 00:04:22.790 "num_blocks": 16384, 00:04:22.790 "uuid": "8dff99fe-f048-5caa-88e4-ed359e0fe521", 00:04:22.790 "assigned_rate_limits": { 00:04:22.790 "rw_ios_per_sec": 0, 00:04:22.790 "rw_mbytes_per_sec": 0, 00:04:22.790 "r_mbytes_per_sec": 0, 00:04:22.790 "w_mbytes_per_sec": 0 00:04:22.790 }, 00:04:22.790 "claimed": false, 00:04:22.790 "zoned": false, 00:04:22.790 "supported_io_types": { 00:04:22.790 "read": true, 00:04:22.790 "write": true, 00:04:22.790 "unmap": true, 00:04:22.790 "flush": true, 00:04:22.790 "reset": true, 00:04:22.790 "nvme_admin": false, 00:04:22.790 "nvme_io": false, 00:04:22.790 "nvme_io_md": false, 00:04:22.790 "write_zeroes": true, 00:04:22.790 "zcopy": true, 00:04:22.790 "get_zone_info": false, 00:04:22.791 "zone_management": false, 00:04:22.791 "zone_append": false, 00:04:22.791 "compare": false, 00:04:22.791 "compare_and_write": false, 00:04:22.791 "abort": true, 00:04:22.791 "seek_hole": false, 00:04:22.791 "seek_data": false, 00:04:22.791 "copy": true, 00:04:22.791 "nvme_iov_md": false 00:04:22.791 }, 00:04:22.791 "memory_domains": [ 00:04:22.791 { 00:04:22.791 "dma_device_id": "system", 00:04:22.791 "dma_device_type": 1 00:04:22.791 }, 
00:04:22.791 { 00:04:22.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.791 "dma_device_type": 2 00:04:22.791 } 00:04:22.791 ], 00:04:22.791 "driver_specific": { 00:04:22.791 "passthru": { 00:04:22.791 "name": "Passthru0", 00:04:22.791 "base_bdev_name": "Malloc0" 00:04:22.791 } 00:04:22.791 } 00:04:22.791 } 00:04:22.791 ]' 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:22.791 08:58:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:22.791 00:04:22.791 real 0m0.251s 00:04:22.791 user 0m0.129s 00:04:22.791 sys 0m0.030s 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 ************************************ 00:04:22.791 END TEST rpc_integrity 00:04:22.791 ************************************ 00:04:22.791 08:58:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:22.791 08:58:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.791 08:58:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.791 08:58:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 ************************************ 00:04:22.791 START TEST rpc_plugins 00:04:22.791 ************************************ 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:22.791 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.791 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:22.791 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:22.791 08:58:01 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:22.791 { 00:04:22.791 "name": "Malloc1", 00:04:22.791 "aliases": [ 00:04:22.791 "e0cd0a06-e27f-490a-bdf4-1877f21eba85" 00:04:22.791 ], 00:04:22.791 "product_name": "Malloc disk", 00:04:22.791 "block_size": 4096, 00:04:22.791 "num_blocks": 256, 00:04:22.791 "uuid": "e0cd0a06-e27f-490a-bdf4-1877f21eba85", 00:04:22.791 "assigned_rate_limits": { 00:04:22.791 "rw_ios_per_sec": 0, 00:04:22.791 "rw_mbytes_per_sec": 0, 00:04:22.791 "r_mbytes_per_sec": 0, 00:04:22.791 "w_mbytes_per_sec": 0 00:04:22.791 }, 00:04:22.791 "claimed": false, 00:04:22.791 "zoned": false, 00:04:22.791 "supported_io_types": { 00:04:22.791 "read": true, 00:04:22.791 "write": true, 00:04:22.791 "unmap": true, 00:04:22.791 "flush": true, 00:04:22.791 "reset": true, 00:04:22.791 "nvme_admin": false, 00:04:22.791 "nvme_io": false, 00:04:22.791 "nvme_io_md": false, 00:04:22.791 "write_zeroes": true, 00:04:22.791 "zcopy": true, 00:04:22.791 "get_zone_info": false, 00:04:22.791 "zone_management": false, 00:04:22.791 "zone_append": false, 00:04:22.791 "compare": false, 00:04:22.791 "compare_and_write": false, 00:04:22.791 "abort": true, 00:04:22.791 "seek_hole": false, 00:04:22.791 "seek_data": false, 00:04:22.791 "copy": true, 00:04:22.791 "nvme_iov_md": false 00:04:22.791 }, 00:04:22.791 "memory_domains": [ 00:04:22.791 { 00:04:22.791 "dma_device_id": "system", 00:04:22.791 "dma_device_type": 1 00:04:22.791 }, 00:04:22.791 { 00:04:22.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:22.791 "dma_device_type": 2 00:04:22.791 } 00:04:22.791 ], 00:04:22.791 "driver_specific": {} 00:04:22.791 } 00:04:22.791 ]' 00:04:22.791 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:22.791 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:22.791 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.791 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.054 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.054 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:23.054 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.054 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.054 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.054 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:23.054 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:23.054 08:58:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:23.054 00:04:23.054 real 0m0.119s 00:04:23.054 user 0m0.062s 00:04:23.054 sys 0m0.018s 00:04:23.054 ************************************ 00:04:23.054 END TEST rpc_plugins 00:04:23.054 ************************************ 00:04:23.054 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.054 08:58:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:23.054 08:58:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:23.054 08:58:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.054 08:58:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.054 08:58:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.054 ************************************ 00:04:23.054 START TEST rpc_trace_cmd_test 
00:04:23.054 ************************************ 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:23.054 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57111", 00:04:23.054 "tpoint_group_mask": "0x8", 00:04:23.054 "iscsi_conn": { 00:04:23.054 "mask": "0x2", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "scsi": { 00:04:23.054 "mask": "0x4", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "bdev": { 00:04:23.054 "mask": "0x8", 00:04:23.054 "tpoint_mask": "0xffffffffffffffff" 00:04:23.054 }, 00:04:23.054 "nvmf_rdma": { 00:04:23.054 "mask": "0x10", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "nvmf_tcp": { 00:04:23.054 "mask": "0x20", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "ftl": { 00:04:23.054 "mask": "0x40", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "blobfs": { 00:04:23.054 "mask": "0x80", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "dsa": { 00:04:23.054 "mask": "0x200", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "thread": { 00:04:23.054 "mask": "0x400", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "nvme_pcie": { 00:04:23.054 "mask": "0x800", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "iaa": { 00:04:23.054 "mask": "0x1000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "nvme_tcp": { 00:04:23.054 "mask": "0x2000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "bdev_nvme": { 00:04:23.054 "mask": "0x4000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "sock": { 00:04:23.054 "mask": "0x8000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "blob": { 00:04:23.054 "mask": "0x10000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "bdev_raid": { 00:04:23.054 "mask": "0x20000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 }, 00:04:23.054 "scheduler": { 00:04:23.054 "mask": "0x40000", 00:04:23.054 "tpoint_mask": "0x0" 00:04:23.054 } 00:04:23.054 }' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:23.054 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:23.316 08:58:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:23.316 08:58:02 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:23.316 00:04:23.316 real 0m0.181s 00:04:23.316 
user 0m0.144s 00:04:23.316 sys 0m0.025s 00:04:23.316 ************************************ 00:04:23.316 08:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.316 08:58:02 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:23.316 END TEST rpc_trace_cmd_test 00:04:23.316 ************************************ 00:04:23.316 08:58:02 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:23.316 08:58:02 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:23.316 08:58:02 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:23.316 08:58:02 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.316 08:58:02 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.316 08:58:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.316 ************************************ 00:04:23.316 START TEST rpc_daemon_integrity 00:04:23.316 ************************************ 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.316 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:23.316 { 00:04:23.316 "name": "Malloc2", 00:04:23.316 "aliases": [ 00:04:23.316 "770e9435-63d8-40d6-8818-962fa62eb638" 00:04:23.316 ], 00:04:23.316 "product_name": "Malloc disk", 00:04:23.316 "block_size": 512, 00:04:23.316 "num_blocks": 16384, 00:04:23.316 "uuid": "770e9435-63d8-40d6-8818-962fa62eb638", 00:04:23.316 "assigned_rate_limits": { 00:04:23.316 "rw_ios_per_sec": 0, 00:04:23.316 "rw_mbytes_per_sec": 0, 00:04:23.316 "r_mbytes_per_sec": 0, 00:04:23.316 "w_mbytes_per_sec": 0 00:04:23.316 }, 00:04:23.316 "claimed": false, 00:04:23.316 "zoned": false, 00:04:23.316 "supported_io_types": { 00:04:23.316 "read": true, 00:04:23.316 "write": true, 00:04:23.316 "unmap": true, 00:04:23.316 "flush": true, 00:04:23.316 "reset": true, 00:04:23.316 "nvme_admin": false, 00:04:23.316 "nvme_io": false, 00:04:23.316 "nvme_io_md": false, 00:04:23.316 "write_zeroes": true, 00:04:23.316 "zcopy": true, 00:04:23.316 "get_zone_info": 
false, 00:04:23.316 "zone_management": false, 00:04:23.316 "zone_append": false, 00:04:23.316 "compare": false, 00:04:23.317 "compare_and_write": false, 00:04:23.317 "abort": true, 00:04:23.317 "seek_hole": false, 00:04:23.317 "seek_data": false, 00:04:23.317 "copy": true, 00:04:23.317 "nvme_iov_md": false 00:04:23.317 }, 00:04:23.317 "memory_domains": [ 00:04:23.317 { 00:04:23.317 "dma_device_id": "system", 00:04:23.317 "dma_device_type": 1 00:04:23.317 }, 00:04:23.317 { 00:04:23.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.317 "dma_device_type": 2 00:04:23.317 } 00:04:23.317 ], 00:04:23.317 "driver_specific": {} 00:04:23.317 } 00:04:23.317 ]' 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.317 [2024-11-20 08:58:02.184238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:23.317 [2024-11-20 08:58:02.184310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:23.317 [2024-11-20 08:58:02.184336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:23.317 [2024-11-20 08:58:02.184350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:23.317 [2024-11-20 08:58:02.186904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:23.317 [2024-11-20 08:58:02.186950] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:23.317 Passthru0 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:23.317 { 00:04:23.317 "name": "Malloc2", 00:04:23.317 "aliases": [ 00:04:23.317 "770e9435-63d8-40d6-8818-962fa62eb638" 00:04:23.317 ], 00:04:23.317 "product_name": "Malloc disk", 00:04:23.317 "block_size": 512, 00:04:23.317 "num_blocks": 16384, 00:04:23.317 "uuid": "770e9435-63d8-40d6-8818-962fa62eb638", 00:04:23.317 "assigned_rate_limits": { 00:04:23.317 "rw_ios_per_sec": 0, 00:04:23.317 "rw_mbytes_per_sec": 0, 00:04:23.317 "r_mbytes_per_sec": 0, 00:04:23.317 "w_mbytes_per_sec": 0 00:04:23.317 }, 00:04:23.317 "claimed": true, 00:04:23.317 "claim_type": "exclusive_write", 00:04:23.317 "zoned": false, 00:04:23.317 "supported_io_types": { 00:04:23.317 "read": true, 00:04:23.317 "write": true, 00:04:23.317 "unmap": true, 00:04:23.317 "flush": true, 00:04:23.317 "reset": true, 00:04:23.317 "nvme_admin": false, 00:04:23.317 "nvme_io": false, 00:04:23.317 "nvme_io_md": false, 00:04:23.317 "write_zeroes": true, 00:04:23.317 "zcopy": true, 00:04:23.317 "get_zone_info": false, 00:04:23.317 "zone_management": false, 00:04:23.317 "zone_append": false, 00:04:23.317 "compare": false, 
00:04:23.317 "compare_and_write": false, 00:04:23.317 "abort": true, 00:04:23.317 "seek_hole": false, 00:04:23.317 "seek_data": false, 00:04:23.317 "copy": true, 00:04:23.317 "nvme_iov_md": false 00:04:23.317 }, 00:04:23.317 "memory_domains": [ 00:04:23.317 { 00:04:23.317 "dma_device_id": "system", 00:04:23.317 "dma_device_type": 1 00:04:23.317 }, 00:04:23.317 { 00:04:23.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.317 "dma_device_type": 2 00:04:23.317 } 00:04:23.317 ], 00:04:23.317 "driver_specific": {} 00:04:23.317 }, 00:04:23.317 { 00:04:23.317 "name": "Passthru0", 00:04:23.317 "aliases": [ 00:04:23.317 "7daecbe3-d05d-5cde-ae37-dd409d4e7fd2" 00:04:23.317 ], 00:04:23.317 "product_name": "passthru", 00:04:23.317 "block_size": 512, 00:04:23.317 "num_blocks": 16384, 00:04:23.317 "uuid": "7daecbe3-d05d-5cde-ae37-dd409d4e7fd2", 00:04:23.317 "assigned_rate_limits": { 00:04:23.317 "rw_ios_per_sec": 0, 00:04:23.317 "rw_mbytes_per_sec": 0, 00:04:23.317 "r_mbytes_per_sec": 0, 00:04:23.317 "w_mbytes_per_sec": 0 00:04:23.317 }, 00:04:23.317 "claimed": false, 00:04:23.317 "zoned": false, 00:04:23.317 "supported_io_types": { 00:04:23.317 "read": true, 00:04:23.317 "write": true, 00:04:23.317 "unmap": true, 00:04:23.317 "flush": true, 00:04:23.317 "reset": true, 00:04:23.317 "nvme_admin": false, 00:04:23.317 "nvme_io": false, 00:04:23.317 "nvme_io_md": false, 00:04:23.317 "write_zeroes": true, 00:04:23.317 "zcopy": true, 00:04:23.317 "get_zone_info": false, 00:04:23.317 "zone_management": false, 00:04:23.317 "zone_append": false, 00:04:23.317 "compare": false, 00:04:23.317 "compare_and_write": false, 00:04:23.317 "abort": true, 00:04:23.317 "seek_hole": false, 00:04:23.317 "seek_data": false, 00:04:23.317 "copy": true, 00:04:23.317 "nvme_iov_md": false 00:04:23.317 }, 00:04:23.317 "memory_domains": [ 00:04:23.317 { 00:04:23.317 "dma_device_id": "system", 00:04:23.317 "dma_device_type": 1 00:04:23.317 }, 00:04:23.317 { 00:04:23.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:23.317 "dma_device_type": 2 00:04:23.317 } 00:04:23.317 ], 00:04:23.317 "driver_specific": { 00:04:23.317 "passthru": { 00:04:23.317 "name": "Passthru0", 00:04:23.317 "base_bdev_name": "Malloc2" 00:04:23.317 } 00:04:23.317 } 00:04:23.317 } 00:04:23.317 ]' 00:04:23.317 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:23.620 00:04:23.620 real 0m0.250s 00:04:23.620 user 0m0.132s 00:04:23.620 sys 0m0.032s 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.620 08:58:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:23.620 ************************************ 00:04:23.620 END TEST rpc_daemon_integrity 00:04:23.620 ************************************ 00:04:23.620 08:58:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:23.620 08:58:02 rpc -- rpc/rpc.sh@84 -- # killprocess 57111 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@954 -- # '[' -z 57111 ']' 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@958 -- # kill -0 57111 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57111 00:04:23.620 killing process with pid 57111 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57111' 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@973 -- # kill 57111 00:04:23.620 08:58:02 rpc -- common/autotest_common.sh@978 -- # wait 57111 00:04:25.547 00:04:25.547 real 0m3.947s 00:04:25.547 user 0m4.273s 00:04:25.547 sys 0m0.736s 00:04:25.547 08:58:04 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.547 08:58:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.547 ************************************ 00:04:25.547 END TEST rpc 00:04:25.547 ************************************ 00:04:25.547 08:58:04 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.547 08:58:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.547 08:58:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.547 08:58:04 -- common/autotest_common.sh@10 -- # set +x 00:04:25.547 ************************************ 00:04:25.547 START TEST skip_rpc 00:04:25.547 ************************************ 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:25.547 * Looking for test storage... 
00:04:25.547 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.547 08:58:04 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.547 --rc genhtml_branch_coverage=1 00:04:25.547 --rc genhtml_function_coverage=1 00:04:25.547 --rc genhtml_legend=1 00:04:25.547 --rc geninfo_all_blocks=1 00:04:25.547 --rc geninfo_unexecuted_blocks=1 00:04:25.547 00:04:25.547 ' 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.547 --rc genhtml_branch_coverage=1 00:04:25.547 --rc genhtml_function_coverage=1 00:04:25.547 --rc genhtml_legend=1 00:04:25.547 --rc geninfo_all_blocks=1 00:04:25.547 --rc geninfo_unexecuted_blocks=1 00:04:25.547 00:04:25.547 ' 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:25.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.547 --rc genhtml_branch_coverage=1 00:04:25.547 --rc genhtml_function_coverage=1 00:04:25.547 --rc genhtml_legend=1 00:04:25.547 --rc geninfo_all_blocks=1 00:04:25.547 --rc geninfo_unexecuted_blocks=1 00:04:25.547 00:04:25.547 ' 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.547 --rc genhtml_branch_coverage=1 00:04:25.547 --rc genhtml_function_coverage=1 00:04:25.547 --rc genhtml_legend=1 00:04:25.547 --rc geninfo_all_blocks=1 00:04:25.547 --rc geninfo_unexecuted_blocks=1 00:04:25.547 00:04:25.547 ' 00:04:25.547 08:58:04 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.547 08:58:04 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.547 08:58:04 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:25.547 08:58:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.548 08:58:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.548 08:58:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.548 ************************************ 00:04:25.548 START TEST skip_rpc 00:04:25.548 ************************************ 00:04:25.548 08:58:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:25.548 08:58:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57329 00:04:25.548 08:58:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.548 08:58:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:25.548 08:58:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:25.548 [2024-11-20 08:58:04.370561] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
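The cmp_versions trace earlier in this block (lt 1.15 2) is how scripts/common.sh decides which lcov option set to export: split both version strings on '.', '-' and ':', then compare component by component until one side wins. A simplified reconstruction of that logic (the real helper normalizes each component through decimal and handles more operators):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 ver1 ver2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }   # first difference decides
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]               # all equal: only ==, <=, >= succeed
    }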
00:04:25.548 [2024-11-20 08:58:04.370703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57329 ] 00:04:25.810 [2024-11-20 08:58:04.534578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.810 [2024-11-20 08:58:04.658755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57329 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57329 ']' 00:04:31.097 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57329 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57329 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:31.098 killing process with pid 57329 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57329' 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57329 00:04:31.098 08:58:09 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57329 00:04:31.663 00:04:31.664 real 0m6.221s 00:04:31.664 user 0m5.722s 00:04:31.664 sys 0m0.393s 00:04:31.664 08:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.664 08:58:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.664 ************************************ 00:04:31.664 END TEST skip_rpc 00:04:31.664 
************************************ 00:04:31.664 08:58:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:31.664 08:58:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.664 08:58:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.664 08:58:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.664 ************************************ 00:04:31.664 START TEST skip_rpc_with_json 00:04:31.664 ************************************ 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57422 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57422 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57422 ']' 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.664 08:58:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.924 [2024-11-20 08:58:10.607555] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
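The skip_rpc run that just ended hinges on inverting a command's exit status: rpc_cmd spdk_get_version must fail because the target was started with --no-rpc-server, and NOT converts that failure (es=1 in the trace) into a test pass. The essential shape of the helper, stripped of the valid_exec_arg and es>128 bookkeeping that autotest_common.sh layers on top:

    NOT() {
        local es=0
        "$@" || es=$?        # run the command, capture its status
        (( es != 0 ))        # succeed only if the command failed
    }
    # e.g.: NOT rpc_cmd spdk_get_version   # passes while no RPC server listens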
00:04:31.925 [2024-11-20 08:58:10.607642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57422 ] 00:04:31.925 [2024-11-20 08:58:10.763452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.184 [2024-11-20 08:58:10.860493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.752 [2024-11-20 08:58:11.454508] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:32.752 request: 00:04:32.752 { 00:04:32.752 "trtype": "tcp", 00:04:32.752 "method": "nvmf_get_transports", 00:04:32.752 "req_id": 1 00:04:32.752 } 00:04:32.752 Got JSON-RPC error response 00:04:32.752 response: 00:04:32.752 { 00:04:32.752 "code": -19, 00:04:32.752 "message": "No such device" 00:04:32.752 } 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.752 [2024-11-20 08:58:11.466610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.752 08:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.752 { 00:04:32.752 "subsystems": [ 00:04:32.752 { 00:04:32.752 "subsystem": "fsdev", 00:04:32.752 "config": [ 00:04:32.752 { 00:04:32.752 "method": "fsdev_set_opts", 00:04:32.752 "params": { 00:04:32.752 "fsdev_io_pool_size": 65535, 00:04:32.752 "fsdev_io_cache_size": 256 00:04:32.752 } 00:04:32.752 } 00:04:32.752 ] 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "subsystem": "keyring", 00:04:32.752 "config": [] 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "subsystem": "iobuf", 00:04:32.752 "config": [ 00:04:32.752 { 00:04:32.752 "method": "iobuf_set_options", 00:04:32.752 "params": { 00:04:32.752 "small_pool_count": 8192, 00:04:32.752 "large_pool_count": 1024, 00:04:32.752 "small_bufsize": 8192, 00:04:32.752 "large_bufsize": 135168, 00:04:32.752 "enable_numa": false 00:04:32.752 } 00:04:32.752 } 00:04:32.752 ] 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "subsystem": "sock", 00:04:32.752 "config": [ 00:04:32.752 { 
00:04:32.752 "method": "sock_set_default_impl", 00:04:32.752 "params": { 00:04:32.752 "impl_name": "posix" 00:04:32.752 } 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "method": "sock_impl_set_options", 00:04:32.752 "params": { 00:04:32.752 "impl_name": "ssl", 00:04:32.752 "recv_buf_size": 4096, 00:04:32.752 "send_buf_size": 4096, 00:04:32.752 "enable_recv_pipe": true, 00:04:32.752 "enable_quickack": false, 00:04:32.752 "enable_placement_id": 0, 00:04:32.752 "enable_zerocopy_send_server": true, 00:04:32.752 "enable_zerocopy_send_client": false, 00:04:32.752 "zerocopy_threshold": 0, 00:04:32.752 "tls_version": 0, 00:04:32.752 "enable_ktls": false 00:04:32.752 } 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "method": "sock_impl_set_options", 00:04:32.752 "params": { 00:04:32.752 "impl_name": "posix", 00:04:32.752 "recv_buf_size": 2097152, 00:04:32.752 "send_buf_size": 2097152, 00:04:32.752 "enable_recv_pipe": true, 00:04:32.752 "enable_quickack": false, 00:04:32.752 "enable_placement_id": 0, 00:04:32.752 "enable_zerocopy_send_server": true, 00:04:32.752 "enable_zerocopy_send_client": false, 00:04:32.752 "zerocopy_threshold": 0, 00:04:32.752 "tls_version": 0, 00:04:32.752 "enable_ktls": false 00:04:32.752 } 00:04:32.752 } 00:04:32.752 ] 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "subsystem": "vmd", 00:04:32.752 "config": [] 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "subsystem": "accel", 00:04:32.752 "config": [ 00:04:32.752 { 00:04:32.752 "method": "accel_set_options", 00:04:32.752 "params": { 00:04:32.752 "small_cache_size": 128, 00:04:32.752 "large_cache_size": 16, 00:04:32.752 "task_count": 2048, 00:04:32.752 "sequence_count": 2048, 00:04:32.752 "buf_count": 2048 00:04:32.752 } 00:04:32.752 } 00:04:32.752 ] 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "subsystem": "bdev", 00:04:32.752 "config": [ 00:04:32.752 { 00:04:32.752 "method": "bdev_set_options", 00:04:32.752 "params": { 00:04:32.752 "bdev_io_pool_size": 65535, 00:04:32.752 "bdev_io_cache_size": 256, 00:04:32.752 "bdev_auto_examine": true, 00:04:32.752 "iobuf_small_cache_size": 128, 00:04:32.752 "iobuf_large_cache_size": 16 00:04:32.752 } 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "method": "bdev_raid_set_options", 00:04:32.752 "params": { 00:04:32.752 "process_window_size_kb": 1024, 00:04:32.752 "process_max_bandwidth_mb_sec": 0 00:04:32.752 } 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "method": "bdev_iscsi_set_options", 00:04:32.752 "params": { 00:04:32.752 "timeout_sec": 30 00:04:32.752 } 00:04:32.752 }, 00:04:32.752 { 00:04:32.752 "method": "bdev_nvme_set_options", 00:04:32.752 "params": { 00:04:32.752 "action_on_timeout": "none", 00:04:32.752 "timeout_us": 0, 00:04:32.752 "timeout_admin_us": 0, 00:04:32.752 "keep_alive_timeout_ms": 10000, 00:04:32.752 "arbitration_burst": 0, 00:04:32.752 "low_priority_weight": 0, 00:04:32.752 "medium_priority_weight": 0, 00:04:32.752 "high_priority_weight": 0, 00:04:32.752 "nvme_adminq_poll_period_us": 10000, 00:04:32.753 "nvme_ioq_poll_period_us": 0, 00:04:32.753 "io_queue_requests": 0, 00:04:32.753 "delay_cmd_submit": true, 00:04:32.753 "transport_retry_count": 4, 00:04:32.753 "bdev_retry_count": 3, 00:04:32.753 "transport_ack_timeout": 0, 00:04:32.753 "ctrlr_loss_timeout_sec": 0, 00:04:32.753 "reconnect_delay_sec": 0, 00:04:32.753 "fast_io_fail_timeout_sec": 0, 00:04:32.753 "disable_auto_failback": false, 00:04:32.753 "generate_uuids": false, 00:04:32.753 "transport_tos": 0, 00:04:32.753 "nvme_error_stat": false, 00:04:32.753 "rdma_srq_size": 0, 00:04:32.753 "io_path_stat": false, 
00:04:32.753 "allow_accel_sequence": false, 00:04:32.753 "rdma_max_cq_size": 0, 00:04:32.753 "rdma_cm_event_timeout_ms": 0, 00:04:32.753 "dhchap_digests": [ 00:04:32.753 "sha256", 00:04:32.753 "sha384", 00:04:32.753 "sha512" 00:04:32.753 ], 00:04:32.753 "dhchap_dhgroups": [ 00:04:32.753 "null", 00:04:32.753 "ffdhe2048", 00:04:32.753 "ffdhe3072", 00:04:32.753 "ffdhe4096", 00:04:32.753 "ffdhe6144", 00:04:32.753 "ffdhe8192" 00:04:32.753 ] 00:04:32.753 } 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "method": "bdev_nvme_set_hotplug", 00:04:32.753 "params": { 00:04:32.753 "period_us": 100000, 00:04:32.753 "enable": false 00:04:32.753 } 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "method": "bdev_wait_for_examine" 00:04:32.753 } 00:04:32.753 ] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "scsi", 00:04:32.753 "config": null 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "scheduler", 00:04:32.753 "config": [ 00:04:32.753 { 00:04:32.753 "method": "framework_set_scheduler", 00:04:32.753 "params": { 00:04:32.753 "name": "static" 00:04:32.753 } 00:04:32.753 } 00:04:32.753 ] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "vhost_scsi", 00:04:32.753 "config": [] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "vhost_blk", 00:04:32.753 "config": [] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "ublk", 00:04:32.753 "config": [] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "nbd", 00:04:32.753 "config": [] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "nvmf", 00:04:32.753 "config": [ 00:04:32.753 { 00:04:32.753 "method": "nvmf_set_config", 00:04:32.753 "params": { 00:04:32.753 "discovery_filter": "match_any", 00:04:32.753 "admin_cmd_passthru": { 00:04:32.753 "identify_ctrlr": false 00:04:32.753 }, 00:04:32.753 "dhchap_digests": [ 00:04:32.753 "sha256", 00:04:32.753 "sha384", 00:04:32.753 "sha512" 00:04:32.753 ], 00:04:32.753 "dhchap_dhgroups": [ 00:04:32.753 "null", 00:04:32.753 "ffdhe2048", 00:04:32.753 "ffdhe3072", 00:04:32.753 "ffdhe4096", 00:04:32.753 "ffdhe6144", 00:04:32.753 "ffdhe8192" 00:04:32.753 ] 00:04:32.753 } 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "method": "nvmf_set_max_subsystems", 00:04:32.753 "params": { 00:04:32.753 "max_subsystems": 1024 00:04:32.753 } 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "method": "nvmf_set_crdt", 00:04:32.753 "params": { 00:04:32.753 "crdt1": 0, 00:04:32.753 "crdt2": 0, 00:04:32.753 "crdt3": 0 00:04:32.753 } 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "method": "nvmf_create_transport", 00:04:32.753 "params": { 00:04:32.753 "trtype": "TCP", 00:04:32.753 "max_queue_depth": 128, 00:04:32.753 "max_io_qpairs_per_ctrlr": 127, 00:04:32.753 "in_capsule_data_size": 4096, 00:04:32.753 "max_io_size": 131072, 00:04:32.753 "io_unit_size": 131072, 00:04:32.753 "max_aq_depth": 128, 00:04:32.753 "num_shared_buffers": 511, 00:04:32.753 "buf_cache_size": 4294967295, 00:04:32.753 "dif_insert_or_strip": false, 00:04:32.753 "zcopy": false, 00:04:32.753 "c2h_success": true, 00:04:32.753 "sock_priority": 0, 00:04:32.753 "abort_timeout_sec": 1, 00:04:32.753 "ack_timeout": 0, 00:04:32.753 "data_wr_pool_size": 0 00:04:32.753 } 00:04:32.753 } 00:04:32.753 ] 00:04:32.753 }, 00:04:32.753 { 00:04:32.753 "subsystem": "iscsi", 00:04:32.753 "config": [ 00:04:32.753 { 00:04:32.753 "method": "iscsi_set_options", 00:04:32.753 "params": { 00:04:32.753 "node_base": "iqn.2016-06.io.spdk", 00:04:32.753 "max_sessions": 128, 00:04:32.753 "max_connections_per_session": 2, 00:04:32.753 "max_queue_depth": 64, 00:04:32.753 
"default_time2wait": 2, 00:04:32.753 "default_time2retain": 20, 00:04:32.753 "first_burst_length": 8192, 00:04:32.753 "immediate_data": true, 00:04:32.753 "allow_duplicated_isid": false, 00:04:32.753 "error_recovery_level": 0, 00:04:32.753 "nop_timeout": 60, 00:04:32.753 "nop_in_interval": 30, 00:04:32.753 "disable_chap": false, 00:04:32.753 "require_chap": false, 00:04:32.753 "mutual_chap": false, 00:04:32.753 "chap_group": 0, 00:04:32.753 "max_large_datain_per_connection": 64, 00:04:32.753 "max_r2t_per_connection": 4, 00:04:32.753 "pdu_pool_size": 36864, 00:04:32.753 "immediate_data_pool_size": 16384, 00:04:32.753 "data_out_pool_size": 2048 00:04:32.753 } 00:04:32.753 } 00:04:32.753 ] 00:04:32.753 } 00:04:32.753 ] 00:04:32.753 } 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57422 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57422 ']' 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57422 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57422 00:04:32.753 killing process with pid 57422 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57422' 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57422 00:04:32.753 08:58:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57422 00:04:34.664 08:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57461 00:04:34.664 08:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.664 08:58:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57461 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57461 ']' 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57461 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57461 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.948 killing process with pid 57461 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57461' 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57461 00:04:39.948 08:58:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57461 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.516 00:04:40.516 real 0m8.801s 00:04:40.516 user 0m8.319s 00:04:40.516 sys 0m0.695s 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:40.516 ************************************ 00:04:40.516 END TEST skip_rpc_with_json 00:04:40.516 ************************************ 00:04:40.516 08:58:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:40.516 08:58:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.516 08:58:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.516 08:58:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.516 ************************************ 00:04:40.516 START TEST skip_rpc_with_delay 00:04:40.516 ************************************ 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:40.516 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:40.778 [2024-11-20 08:58:19.474809] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
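The skip_rpc_with_json sequence that just finished is a configuration round-trip: mutate live state over RPC, serialize it with save_config, restart the target from that JSON with the RPC server disabled, and grep the log for evidence the state was replayed. A condensed sketch (binary paths shortened; the file names follow the CONFIG_PATH and LOG_PATH set above):

    rpc.py nvmf_create_transport -t tcp        # live change: create the TCP transport
    rpc.py save_config > config.json           # snapshot the whole subsystem tree as JSON
    kill "$spdk_pid"; wait "$spdk_pid"
    spdk_tgt --no-rpc-server --json config.json &> log.txt &
    spdk_pid=$!; sleep 5
    kill "$spdk_pid"; wait "$spdk_pid" || true
    grep -q 'TCP Transport Init' log.txt       # transport was recreated from the file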
00:04:40.778 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:40.778 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.778 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.778 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.778 00:04:40.778 real 0m0.121s 00:04:40.778 user 0m0.069s 00:04:40.778 sys 0m0.050s 00:04:40.778 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.778 08:58:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:40.778 ************************************ 00:04:40.778 END TEST skip_rpc_with_delay 00:04:40.778 ************************************ 00:04:40.778 08:58:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:40.778 08:58:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:40.778 08:58:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:40.778 08:58:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.778 08:58:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.778 08:58:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.778 ************************************ 00:04:40.778 START TEST exit_on_failed_rpc_init 00:04:40.778 ************************************ 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57584 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57584 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57584 ']' 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:40.778 08:58:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.778 [2024-11-20 08:58:19.636937] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:40.778 [2024-11-20 08:58:19.637032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57584 ] 00:04:41.038 [2024-11-20 08:58:19.787321] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.038 [2024-11-20 08:58:19.863401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.603 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:41.862 [2024-11-20 08:58:20.552887] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:41.862 [2024-11-20 08:58:20.553004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57602 ] 00:04:41.862 [2024-11-20 08:58:20.709186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.121 [2024-11-20 08:58:20.804558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.121 [2024-11-20 08:58:20.804642] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:42.121 [2024-11-20 08:58:20.804655] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.121 [2024-11-20 08:58:20.804667] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57584 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57584 ']' 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57584 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.121 08:58:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57584 00:04:42.121 08:58:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.121 08:58:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.121 killing process with pid 57584 00:04:42.121 08:58:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57584' 00:04:42.121 08:58:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57584 00:04:42.121 08:58:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57584 00:04:43.497 00:04:43.497 real 0m2.599s 00:04:43.497 user 0m2.929s 00:04:43.497 sys 0m0.378s 00:04:43.497 08:58:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.497 ************************************ 00:04:43.497 END TEST exit_on_failed_rpc_init 00:04:43.497 ************************************ 00:04:43.497 08:58:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.497 08:58:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.497 00:04:43.497 real 0m18.111s 00:04:43.497 user 0m17.186s 00:04:43.497 sys 0m1.702s 00:04:43.497 08:58:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.497 ************************************ 00:04:43.497 END TEST skip_rpc 00:04:43.497 ************************************ 00:04:43.497 08:58:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.497 08:58:22 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.497 08:58:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.497 08:58:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.497 08:58:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.497 
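The failure cascade above is the whole point of exit_on_failed_rpc_init: a second spdk_tgt cannot bind the Unix socket the first one already owns, and the test only passes if that second instance exits non-zero (hence the es=234 -> es=106 -> es=1 handling in the trace). Reduced to its skeleton, assuming both instances default to /var/tmp/spdk.sock:

    spdk_tgt -m 0x1 &            # first instance claims /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten "$spdk_pid"    # helper from autotest_common.sh
    NOT spdk_tgt -m 0x2          # second instance must fail RPC init and exit
    killprocess "$spdk_pid"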
************************************ 00:04:43.497 START TEST rpc_client 00:04:43.497 ************************************ 00:04:43.497 08:58:22 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:43.497 * Looking for test storage... 00:04:43.497 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:43.497 08:58:22 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.497 08:58:22 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.497 08:58:22 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.497 08:58:22 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:43.497 08:58:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.758 08:58:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:43.758 OK 00:04:43.758 08:58:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:43.758 00:04:43.758 real 0m0.181s 00:04:43.758 user 0m0.108s 00:04:43.758 sys 0m0.078s 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.758 08:58:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.758 ************************************ 00:04:43.758 END TEST rpc_client 00:04:43.758 ************************************ 00:04:43.758 08:58:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.758 08:58:22 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.758 08:58:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.758 08:58:22 -- common/autotest_common.sh@10 -- # set +x 00:04:43.758 ************************************ 00:04:43.758 START TEST json_config 00:04:43.758 ************************************ 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.758 08:58:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.758 08:58:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.758 08:58:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.758 08:58:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.758 08:58:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.758 08:58:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:43.758 08:58:22 json_config -- scripts/common.sh@345 -- # : 1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.758 08:58:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.758 08:58:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@353 -- # local d=1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.758 08:58:22 json_config -- scripts/common.sh@355 -- # echo 1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.758 08:58:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@353 -- # local d=2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.758 08:58:22 json_config -- scripts/common.sh@355 -- # echo 2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.758 08:58:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.758 08:58:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.758 08:58:22 json_config -- scripts/common.sh@368 -- # return 0 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.758 --rc genhtml_branch_coverage=1 00:04:43.758 --rc genhtml_function_coverage=1 00:04:43.758 --rc genhtml_legend=1 00:04:43.758 --rc geninfo_all_blocks=1 00:04:43.758 --rc geninfo_unexecuted_blocks=1 00:04:43.758 00:04:43.758 ' 00:04:43.758 08:58:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.758 
08:58:22 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fd767b7-dbef-47a7-9446-009fc2cf8346 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=3fd767b7-dbef-47a7-9446-009fc2cf8346 00:04:43.758 08:58:22 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.759 08:58:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.759 08:58:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.759 08:58:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.759 08:58:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.759 08:58:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.759 08:58:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.759 08:58:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.759 08:58:22 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.759 08:58:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:43.759 08:58:22 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:43.759 08:58:22 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:43.759 08:58:22 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@50 -- # : 0 00:04:43.759 
08:58:22 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:43.759 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:43.759 08:58:22 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.759 WARNING: No tests are enabled so not running JSON configuration tests 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:43.759 08:58:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:43.759 00:04:43.759 real 0m0.140s 00:04:43.759 user 0m0.097s 00:04:43.759 sys 0m0.047s 00:04:43.759 08:58:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.759 ************************************ 00:04:43.759 END TEST json_config 00:04:43.759 ************************************ 00:04:43.759 08:58:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.020 08:58:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.020 08:58:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.020 08:58:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.020 08:58:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.020 ************************************ 00:04:44.020 START TEST json_config_extra_key 00:04:44.020 ************************************ 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:44.020 
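The "[: : integer expression expected" noise above is benign but worth decoding: line 31 of nvmf/common.sh ends up running '[' '' -eq 1 ']' because the variable it tests is unset in this job, and test(1) refuses to compare an empty string numerically. The usual hardening (the variable name here is hypothetical; the log does not show which one line 31 tests):

    [ "${SPDK_TEST_NVMF_NICS:-0}" -eq 1 ]   # default unset/empty to 0 before -eq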
08:58:22 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.020 08:58:22 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.020 --rc genhtml_branch_coverage=1 00:04:44.020 --rc genhtml_function_coverage=1 00:04:44.020 --rc genhtml_legend=1 00:04:44.020 --rc geninfo_all_blocks=1 00:04:44.020 --rc geninfo_unexecuted_blocks=1 00:04:44.020 00:04:44.020 ' 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.020 --rc genhtml_branch_coverage=1 00:04:44.020 --rc genhtml_function_coverage=1 00:04:44.020 --rc genhtml_legend=1 00:04:44.020 --rc geninfo_all_blocks=1 00:04:44.020 --rc geninfo_unexecuted_blocks=1 00:04:44.020 00:04:44.020 ' 00:04:44.020 08:58:22 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:44.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.020 --rc genhtml_branch_coverage=1 00:04:44.020 --rc genhtml_function_coverage=1 00:04:44.020 --rc genhtml_legend=1 00:04:44.020 --rc geninfo_all_blocks=1 00:04:44.021 --rc geninfo_unexecuted_blocks=1 00:04:44.021 00:04:44.021 ' 00:04:44.021 08:58:22 json_config_extra_key -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:44.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.021 --rc genhtml_branch_coverage=1 00:04:44.021 --rc genhtml_function_coverage=1 00:04:44.021 --rc genhtml_legend=1 00:04:44.021 --rc geninfo_all_blocks=1 00:04:44.021 --rc geninfo_unexecuted_blocks=1 00:04:44.021 00:04:44.021 ' 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3fd767b7-dbef-47a7-9446-009fc2cf8346 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=3fd767b7-dbef-47a7-9446-009fc2cf8346 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@47 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:44.021 08:58:22 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:44.021 08:58:22 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:44.021 08:58:22 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:44.021 08:58:22 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:44.021 08:58:22 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.021 08:58:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.021 08:58:22 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.021 08:58:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:44.021 08:58:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/setup.sh 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:44.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:44.021 08:58:22 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:44.021 INFO: launching applications... 
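The declare -A lines in this stretch are the whole state model of json_config/common.sh: pid, RPC socket, launch parameters, and config path are each a bash associative array keyed by app name ('target' here). A minimal sketch of that pattern, assembled only from values visible in this trace (start_app is an illustrative name; the real helper is json_config_test_start_app and additionally waits for the RPC socket to come up):

    #!/usr/bin/env bash
    # Per-app bookkeeping keyed by app name, mirroring json_config/common.sh.
    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    start_app() {
        local app=$1
        # Launch with the per-app core mask/memory limit, a private RPC socket,
        # and this app's JSON config (all values above come from the trace).
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
            ${app_params[$app]} -r "${app_socket[$app]}" \
            --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }

Keying everything by app name keeps the helpers generic, so the same start and shutdown code can drive more than one app in a test.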
00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:44.021 08:58:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:44.021 Waiting for target to run... 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57790 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57790 /var/tmp/spdk_tgt.sock 00:04:44.021 08:58:22 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:44.021 08:58:22 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57790 ']' 00:04:44.021 08:58:22 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:44.021 08:58:22 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:44.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:44.021 08:58:22 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:44.021 08:58:22 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:44.021 08:58:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:44.021 [2024-11-20 08:58:22.920642] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:44.021 [2024-11-20 08:58:22.920774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57790 ] 00:04:44.639 [2024-11-20 08:58:23.287676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.639 [2024-11-20 08:58:23.368748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.898 08:58:23 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.898 08:58:23 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:44.898 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:44.898 INFO: shutting down applications... 00:04:44.898 08:58:23 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:44.898 08:58:23 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57790 ]] 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57790 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57790 00:04:44.898 08:58:23 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.467 08:58:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.467 08:58:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.467 08:58:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57790 00:04:45.467 08:58:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.034 08:58:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.034 08:58:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.034 08:58:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57790 00:04:46.034 08:58:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:46.604 08:58:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:46.605 08:58:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.605 08:58:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57790 00:04:46.605 08:58:25 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:46.605 08:58:25 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:46.605 08:58:25 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:46.605 SPDK target shutdown done 00:04:46.605 08:58:25 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:46.605 Success 00:04:46.605 08:58:25 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:46.605 00:04:46.605 real 0m2.583s 00:04:46.605 user 0m2.301s 00:04:46.605 sys 0m0.421s 00:04:46.605 08:58:25 json_config_extra_key -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:04:46.605 08:58:25 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:46.605 ************************************ 00:04:46.605 END TEST json_config_extra_key 00:04:46.605 ************************************ 00:04:46.605 08:58:25 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.605 08:58:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.605 08:58:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.605 08:58:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.605 ************************************ 00:04:46.605 START TEST alias_rpc 00:04:46.605 ************************************ 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:46.605 * Looking for test storage... 00:04:46.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.605 08:58:25 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.605 --rc genhtml_branch_coverage=1 00:04:46.605 --rc genhtml_function_coverage=1 00:04:46.605 --rc genhtml_legend=1 00:04:46.605 --rc geninfo_all_blocks=1 00:04:46.605 --rc geninfo_unexecuted_blocks=1 00:04:46.605 00:04:46.605 ' 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.605 --rc genhtml_branch_coverage=1 00:04:46.605 --rc genhtml_function_coverage=1 00:04:46.605 --rc genhtml_legend=1 00:04:46.605 --rc geninfo_all_blocks=1 00:04:46.605 --rc geninfo_unexecuted_blocks=1 00:04:46.605 00:04:46.605 ' 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.605 --rc genhtml_branch_coverage=1 00:04:46.605 --rc genhtml_function_coverage=1 00:04:46.605 --rc genhtml_legend=1 00:04:46.605 --rc geninfo_all_blocks=1 00:04:46.605 --rc geninfo_unexecuted_blocks=1 00:04:46.605 00:04:46.605 ' 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.605 --rc genhtml_branch_coverage=1 00:04:46.605 --rc genhtml_function_coverage=1 00:04:46.605 --rc genhtml_legend=1 00:04:46.605 --rc geninfo_all_blocks=1 00:04:46.605 --rc geninfo_unexecuted_blocks=1 00:04:46.605 00:04:46.605 ' 00:04:46.605 08:58:25 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:46.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.605 08:58:25 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57876 00:04:46.605 08:58:25 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57876 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57876 ']' 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
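The ERR trap above arms killprocess, and the json_config_extra_key teardown traced a little earlier shows the underlying idiom: send SIGINT, then probe the pid with kill -0 every 0.5 s for at most 30 rounds. A minimal sketch of that loop, assuming a target that exits cleanly on SIGINT:

    # Ask the app to exit, then poll its pid for up to ~15 seconds.
    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            # kill -0 sends no signal; it only tests whether the pid still exists.
            kill -0 "$pid" 2>/dev/null || return 0
            sleep 0.5
        done
        return 1   # still running: let the caller escalate or fail the test
    }

In this run, three 0.5 s rounds passed before pid 57790 vanished and 'SPDK target shutdown done' was logged.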
00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.605 08:58:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.605 08:58:25 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.864 [2024-11-20 08:58:25.560165] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:46.864 [2024-11-20 08:58:25.560290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57876 ] 00:04:46.864 [2024-11-20 08:58:25.718169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.122 [2024-11-20 08:58:25.796723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.687 08:58:26 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.687 08:58:26 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:47.687 08:58:26 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:47.944 08:58:26 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57876 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57876 ']' 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57876 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57876 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.944 killing process with pid 57876 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57876' 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@973 -- # kill 57876 00:04:47.944 08:58:26 alias_rpc -- common/autotest_common.sh@978 -- # wait 57876 00:04:49.319 00:04:49.319 real 0m2.459s 00:04:49.319 user 0m2.583s 00:04:49.319 sys 0m0.371s 00:04:49.319 08:58:27 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.319 ************************************ 00:04:49.319 END TEST alias_rpc 00:04:49.319 ************************************ 00:04:49.319 08:58:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.319 08:58:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:49.319 08:58:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.319 08:58:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.319 08:58:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.319 08:58:27 -- common/autotest_common.sh@10 -- # set +x 00:04:49.319 ************************************ 00:04:49.319 START TEST spdkcli_tcp 00:04:49.319 ************************************ 00:04:49.319 08:58:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.319 * Looking for test storage... 
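killprocess itself, traced just above for pid 57876, adds guards around the plain kill: it first checks that the pid is still alive, and on Linux reads the process's comm name via ps so it never signals a recycled pid that now belongs to something privileged like sudo. A condensed sketch of those guards:

    # Kill a test app only after sanity-checking what the pid points at.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone
        if [[ $(uname) == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for spdk_tgt here
            [[ $name == sudo ]] && return 1          # pid was recycled; do not kill
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                          # reap; ignore the signal exit code
    }

The comm name reads reactor_0 in this trace because SPDK names its main event thread after the reactor it runs on.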
00:04:49.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:49.319 08:58:27 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.319 08:58:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.319 08:58:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.319 08:58:27 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:49.319 08:58:27 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:49.320 08:58:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.320 08:58:27 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.320 08:58:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.320 --rc genhtml_branch_coverage=1 00:04:49.320 --rc genhtml_function_coverage=1 00:04:49.320 --rc genhtml_legend=1 00:04:49.320 --rc geninfo_all_blocks=1 00:04:49.320 --rc geninfo_unexecuted_blocks=1 00:04:49.320 00:04:49.320 ' 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.320 --rc genhtml_branch_coverage=1 00:04:49.320 --rc genhtml_function_coverage=1 00:04:49.320 --rc genhtml_legend=1 00:04:49.320 --rc geninfo_all_blocks=1 00:04:49.320 --rc geninfo_unexecuted_blocks=1 00:04:49.320 
00:04:49.320 ' 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.320 --rc genhtml_branch_coverage=1 00:04:49.320 --rc genhtml_function_coverage=1 00:04:49.320 --rc genhtml_legend=1 00:04:49.320 --rc geninfo_all_blocks=1 00:04:49.320 --rc geninfo_unexecuted_blocks=1 00:04:49.320 00:04:49.320 ' 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.320 --rc genhtml_branch_coverage=1 00:04:49.320 --rc genhtml_function_coverage=1 00:04:49.320 --rc genhtml_legend=1 00:04:49.320 --rc geninfo_all_blocks=1 00:04:49.320 --rc geninfo_unexecuted_blocks=1 00:04:49.320 00:04:49.320 ' 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57967 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57967 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57967 ']' 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.320 08:58:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.320 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.320 [2024-11-20 08:58:28.091369] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:49.320 [2024-11-20 08:58:28.091486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57967 ] 00:04:49.578 [2024-11-20 08:58:28.248552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.578 [2024-11-20 08:58:28.329487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.578 [2024-11-20 08:58:28.329611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.143 08:58:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.143 08:58:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:50.143 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57984 00:04:50.143 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.143 08:58:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:50.402 [ 00:04:50.402 "bdev_malloc_delete", 00:04:50.402 "bdev_malloc_create", 00:04:50.402 "bdev_null_resize", 00:04:50.402 "bdev_null_delete", 00:04:50.402 "bdev_null_create", 00:04:50.403 "bdev_nvme_cuse_unregister", 00:04:50.403 "bdev_nvme_cuse_register", 00:04:50.403 "bdev_opal_new_user", 00:04:50.403 "bdev_opal_set_lock_state", 00:04:50.403 "bdev_opal_delete", 00:04:50.403 "bdev_opal_get_info", 00:04:50.403 "bdev_opal_create", 00:04:50.403 "bdev_nvme_opal_revert", 00:04:50.403 "bdev_nvme_opal_init", 00:04:50.403 "bdev_nvme_send_cmd", 00:04:50.403 "bdev_nvme_set_keys", 00:04:50.403 "bdev_nvme_get_path_iostat", 00:04:50.403 "bdev_nvme_get_mdns_discovery_info", 00:04:50.403 "bdev_nvme_stop_mdns_discovery", 00:04:50.403 "bdev_nvme_start_mdns_discovery", 00:04:50.403 "bdev_nvme_set_multipath_policy", 00:04:50.403 "bdev_nvme_set_preferred_path", 00:04:50.403 "bdev_nvme_get_io_paths", 00:04:50.403 "bdev_nvme_remove_error_injection", 00:04:50.403 "bdev_nvme_add_error_injection", 00:04:50.403 "bdev_nvme_get_discovery_info", 00:04:50.403 "bdev_nvme_stop_discovery", 00:04:50.403 "bdev_nvme_start_discovery", 00:04:50.403 "bdev_nvme_get_controller_health_info", 00:04:50.403 "bdev_nvme_disable_controller", 00:04:50.403 "bdev_nvme_enable_controller", 00:04:50.403 "bdev_nvme_reset_controller", 00:04:50.403 "bdev_nvme_get_transport_statistics", 00:04:50.403 "bdev_nvme_apply_firmware", 00:04:50.403 "bdev_nvme_detach_controller", 00:04:50.403 "bdev_nvme_get_controllers", 00:04:50.403 "bdev_nvme_attach_controller", 00:04:50.403 "bdev_nvme_set_hotplug", 00:04:50.403 "bdev_nvme_set_options", 00:04:50.403 "bdev_passthru_delete", 00:04:50.403 "bdev_passthru_create", 00:04:50.403 "bdev_lvol_set_parent_bdev", 00:04:50.403 "bdev_lvol_set_parent", 00:04:50.403 "bdev_lvol_check_shallow_copy", 00:04:50.403 "bdev_lvol_start_shallow_copy", 00:04:50.403 "bdev_lvol_grow_lvstore", 00:04:50.403 "bdev_lvol_get_lvols", 00:04:50.403 "bdev_lvol_get_lvstores", 00:04:50.403 "bdev_lvol_delete", 00:04:50.403 "bdev_lvol_set_read_only", 00:04:50.403 "bdev_lvol_resize", 00:04:50.403 "bdev_lvol_decouple_parent", 00:04:50.403 "bdev_lvol_inflate", 00:04:50.403 "bdev_lvol_rename", 00:04:50.403 "bdev_lvol_clone_bdev", 00:04:50.403 "bdev_lvol_clone", 00:04:50.403 "bdev_lvol_snapshot", 00:04:50.403 "bdev_lvol_create", 00:04:50.403 "bdev_lvol_delete_lvstore", 00:04:50.403 "bdev_lvol_rename_lvstore", 00:04:50.403 
"bdev_lvol_create_lvstore", 00:04:50.403 "bdev_raid_set_options", 00:04:50.403 "bdev_raid_remove_base_bdev", 00:04:50.403 "bdev_raid_add_base_bdev", 00:04:50.403 "bdev_raid_delete", 00:04:50.403 "bdev_raid_create", 00:04:50.403 "bdev_raid_get_bdevs", 00:04:50.403 "bdev_error_inject_error", 00:04:50.403 "bdev_error_delete", 00:04:50.403 "bdev_error_create", 00:04:50.403 "bdev_split_delete", 00:04:50.403 "bdev_split_create", 00:04:50.403 "bdev_delay_delete", 00:04:50.403 "bdev_delay_create", 00:04:50.403 "bdev_delay_update_latency", 00:04:50.403 "bdev_zone_block_delete", 00:04:50.403 "bdev_zone_block_create", 00:04:50.403 "blobfs_create", 00:04:50.403 "blobfs_detect", 00:04:50.403 "blobfs_set_cache_size", 00:04:50.403 "bdev_xnvme_delete", 00:04:50.403 "bdev_xnvme_create", 00:04:50.403 "bdev_aio_delete", 00:04:50.403 "bdev_aio_rescan", 00:04:50.403 "bdev_aio_create", 00:04:50.403 "bdev_ftl_set_property", 00:04:50.403 "bdev_ftl_get_properties", 00:04:50.403 "bdev_ftl_get_stats", 00:04:50.403 "bdev_ftl_unmap", 00:04:50.403 "bdev_ftl_unload", 00:04:50.403 "bdev_ftl_delete", 00:04:50.403 "bdev_ftl_load", 00:04:50.403 "bdev_ftl_create", 00:04:50.403 "bdev_virtio_attach_controller", 00:04:50.403 "bdev_virtio_scsi_get_devices", 00:04:50.403 "bdev_virtio_detach_controller", 00:04:50.403 "bdev_virtio_blk_set_hotplug", 00:04:50.403 "bdev_iscsi_delete", 00:04:50.403 "bdev_iscsi_create", 00:04:50.403 "bdev_iscsi_set_options", 00:04:50.403 "accel_error_inject_error", 00:04:50.403 "ioat_scan_accel_module", 00:04:50.403 "dsa_scan_accel_module", 00:04:50.403 "iaa_scan_accel_module", 00:04:50.403 "keyring_file_remove_key", 00:04:50.403 "keyring_file_add_key", 00:04:50.403 "keyring_linux_set_options", 00:04:50.403 "fsdev_aio_delete", 00:04:50.403 "fsdev_aio_create", 00:04:50.403 "iscsi_get_histogram", 00:04:50.403 "iscsi_enable_histogram", 00:04:50.403 "iscsi_set_options", 00:04:50.403 "iscsi_get_auth_groups", 00:04:50.403 "iscsi_auth_group_remove_secret", 00:04:50.403 "iscsi_auth_group_add_secret", 00:04:50.403 "iscsi_delete_auth_group", 00:04:50.403 "iscsi_create_auth_group", 00:04:50.403 "iscsi_set_discovery_auth", 00:04:50.403 "iscsi_get_options", 00:04:50.403 "iscsi_target_node_request_logout", 00:04:50.403 "iscsi_target_node_set_redirect", 00:04:50.403 "iscsi_target_node_set_auth", 00:04:50.403 "iscsi_target_node_add_lun", 00:04:50.403 "iscsi_get_stats", 00:04:50.403 "iscsi_get_connections", 00:04:50.403 "iscsi_portal_group_set_auth", 00:04:50.403 "iscsi_start_portal_group", 00:04:50.403 "iscsi_delete_portal_group", 00:04:50.403 "iscsi_create_portal_group", 00:04:50.403 "iscsi_get_portal_groups", 00:04:50.403 "iscsi_delete_target_node", 00:04:50.403 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.403 "iscsi_target_node_add_pg_ig_maps", 00:04:50.403 "iscsi_create_target_node", 00:04:50.403 "iscsi_get_target_nodes", 00:04:50.403 "iscsi_delete_initiator_group", 00:04:50.403 "iscsi_initiator_group_remove_initiators", 00:04:50.403 "iscsi_initiator_group_add_initiators", 00:04:50.403 "iscsi_create_initiator_group", 00:04:50.403 "iscsi_get_initiator_groups", 00:04:50.403 "nvmf_set_crdt", 00:04:50.403 "nvmf_set_config", 00:04:50.403 "nvmf_set_max_subsystems", 00:04:50.403 "nvmf_stop_mdns_prr", 00:04:50.403 "nvmf_publish_mdns_prr", 00:04:50.403 "nvmf_subsystem_get_listeners", 00:04:50.403 "nvmf_subsystem_get_qpairs", 00:04:50.403 "nvmf_subsystem_get_controllers", 00:04:50.403 "nvmf_get_stats", 00:04:50.403 "nvmf_get_transports", 00:04:50.403 "nvmf_create_transport", 00:04:50.403 "nvmf_get_targets", 00:04:50.403 
"nvmf_delete_target", 00:04:50.403 "nvmf_create_target", 00:04:50.403 "nvmf_subsystem_allow_any_host", 00:04:50.403 "nvmf_subsystem_set_keys", 00:04:50.403 "nvmf_subsystem_remove_host", 00:04:50.403 "nvmf_subsystem_add_host", 00:04:50.403 "nvmf_ns_remove_host", 00:04:50.403 "nvmf_ns_add_host", 00:04:50.403 "nvmf_subsystem_remove_ns", 00:04:50.403 "nvmf_subsystem_set_ns_ana_group", 00:04:50.403 "nvmf_subsystem_add_ns", 00:04:50.403 "nvmf_subsystem_listener_set_ana_state", 00:04:50.403 "nvmf_discovery_get_referrals", 00:04:50.403 "nvmf_discovery_remove_referral", 00:04:50.403 "nvmf_discovery_add_referral", 00:04:50.403 "nvmf_subsystem_remove_listener", 00:04:50.403 "nvmf_subsystem_add_listener", 00:04:50.403 "nvmf_delete_subsystem", 00:04:50.403 "nvmf_create_subsystem", 00:04:50.403 "nvmf_get_subsystems", 00:04:50.403 "env_dpdk_get_mem_stats", 00:04:50.403 "nbd_get_disks", 00:04:50.403 "nbd_stop_disk", 00:04:50.404 "nbd_start_disk", 00:04:50.404 "ublk_recover_disk", 00:04:50.404 "ublk_get_disks", 00:04:50.404 "ublk_stop_disk", 00:04:50.404 "ublk_start_disk", 00:04:50.404 "ublk_destroy_target", 00:04:50.404 "ublk_create_target", 00:04:50.404 "virtio_blk_create_transport", 00:04:50.404 "virtio_blk_get_transports", 00:04:50.404 "vhost_controller_set_coalescing", 00:04:50.404 "vhost_get_controllers", 00:04:50.404 "vhost_delete_controller", 00:04:50.404 "vhost_create_blk_controller", 00:04:50.404 "vhost_scsi_controller_remove_target", 00:04:50.404 "vhost_scsi_controller_add_target", 00:04:50.404 "vhost_start_scsi_controller", 00:04:50.404 "vhost_create_scsi_controller", 00:04:50.404 "thread_set_cpumask", 00:04:50.404 "scheduler_set_options", 00:04:50.404 "framework_get_governor", 00:04:50.404 "framework_get_scheduler", 00:04:50.404 "framework_set_scheduler", 00:04:50.404 "framework_get_reactors", 00:04:50.404 "thread_get_io_channels", 00:04:50.404 "thread_get_pollers", 00:04:50.404 "thread_get_stats", 00:04:50.404 "framework_monitor_context_switch", 00:04:50.404 "spdk_kill_instance", 00:04:50.404 "log_enable_timestamps", 00:04:50.404 "log_get_flags", 00:04:50.404 "log_clear_flag", 00:04:50.404 "log_set_flag", 00:04:50.404 "log_get_level", 00:04:50.404 "log_set_level", 00:04:50.404 "log_get_print_level", 00:04:50.404 "log_set_print_level", 00:04:50.404 "framework_enable_cpumask_locks", 00:04:50.404 "framework_disable_cpumask_locks", 00:04:50.404 "framework_wait_init", 00:04:50.404 "framework_start_init", 00:04:50.404 "scsi_get_devices", 00:04:50.404 "bdev_get_histogram", 00:04:50.404 "bdev_enable_histogram", 00:04:50.404 "bdev_set_qos_limit", 00:04:50.404 "bdev_set_qd_sampling_period", 00:04:50.404 "bdev_get_bdevs", 00:04:50.404 "bdev_reset_iostat", 00:04:50.404 "bdev_get_iostat", 00:04:50.404 "bdev_examine", 00:04:50.404 "bdev_wait_for_examine", 00:04:50.404 "bdev_set_options", 00:04:50.404 "accel_get_stats", 00:04:50.404 "accel_set_options", 00:04:50.404 "accel_set_driver", 00:04:50.404 "accel_crypto_key_destroy", 00:04:50.404 "accel_crypto_keys_get", 00:04:50.404 "accel_crypto_key_create", 00:04:50.404 "accel_assign_opc", 00:04:50.404 "accel_get_module_info", 00:04:50.404 "accel_get_opc_assignments", 00:04:50.404 "vmd_rescan", 00:04:50.404 "vmd_remove_device", 00:04:50.404 "vmd_enable", 00:04:50.404 "sock_get_default_impl", 00:04:50.404 "sock_set_default_impl", 00:04:50.404 "sock_impl_set_options", 00:04:50.404 "sock_impl_get_options", 00:04:50.404 "iobuf_get_stats", 00:04:50.404 "iobuf_set_options", 00:04:50.404 "keyring_get_keys", 00:04:50.404 "framework_get_pci_devices", 00:04:50.404 
"framework_get_config", 00:04:50.404 "framework_get_subsystems", 00:04:50.404 "fsdev_set_opts", 00:04:50.404 "fsdev_get_opts", 00:04:50.404 "trace_get_info", 00:04:50.404 "trace_get_tpoint_group_mask", 00:04:50.404 "trace_disable_tpoint_group", 00:04:50.404 "trace_enable_tpoint_group", 00:04:50.404 "trace_clear_tpoint_mask", 00:04:50.404 "trace_set_tpoint_mask", 00:04:50.404 "notify_get_notifications", 00:04:50.404 "notify_get_types", 00:04:50.404 "spdk_get_version", 00:04:50.404 "rpc_get_methods" 00:04:50.404 ] 00:04:50.404 08:58:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.404 08:58:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.404 08:58:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57967 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57967 ']' 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57967 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57967 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57967' 00:04:50.404 killing process with pid 57967 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57967 00:04:50.404 08:58:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57967 00:04:51.812 00:04:51.812 real 0m2.502s 00:04:51.812 user 0m4.481s 00:04:51.812 sys 0m0.446s 00:04:51.812 ************************************ 00:04:51.812 END TEST spdkcli_tcp 00:04:51.812 08:58:30 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.812 08:58:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 ************************************ 00:04:51.812 08:58:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.812 08:58:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.812 08:58:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.812 08:58:30 -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 ************************************ 00:04:51.812 START TEST dpdk_mem_utility 00:04:51.812 ************************************ 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:51.812 * Looking for test storage... 
00:04:51.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:51.812 08:58:30 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:51.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.812 --rc genhtml_branch_coverage=1 00:04:51.812 --rc genhtml_function_coverage=1 00:04:51.812 --rc genhtml_legend=1 00:04:51.812 --rc geninfo_all_blocks=1 00:04:51.812 --rc geninfo_unexecuted_blocks=1 00:04:51.812 00:04:51.812 ' 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:51.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.812 --rc 
genhtml_branch_coverage=1 00:04:51.812 --rc genhtml_function_coverage=1 00:04:51.812 --rc genhtml_legend=1 00:04:51.812 --rc geninfo_all_blocks=1 00:04:51.812 --rc geninfo_unexecuted_blocks=1 00:04:51.812 00:04:51.812 ' 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:51.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.812 --rc genhtml_branch_coverage=1 00:04:51.812 --rc genhtml_function_coverage=1 00:04:51.812 --rc genhtml_legend=1 00:04:51.812 --rc geninfo_all_blocks=1 00:04:51.812 --rc geninfo_unexecuted_blocks=1 00:04:51.812 00:04:51.812 ' 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:51.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.812 --rc genhtml_branch_coverage=1 00:04:51.812 --rc genhtml_function_coverage=1 00:04:51.812 --rc genhtml_legend=1 00:04:51.812 --rc geninfo_all_blocks=1 00:04:51.812 --rc geninfo_unexecuted_blocks=1 00:04:51.812 00:04:51.812 ' 00:04:51.812 08:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:51.812 08:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58072 00:04:51.812 08:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58072 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58072 ']' 00:04:51.812 08:58:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:51.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:51.812 08:58:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.812 [2024-11-20 08:58:30.635438] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:51.813 [2024-11-20 08:58:30.635599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58072 ] 00:04:52.071 [2024-11-20 08:58:30.780854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.071 [2024-11-20 08:58:30.858394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.639 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.639 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:52.639 08:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:52.639 08:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:52.639 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.639 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.639 { 00:04:52.639 "filename": "/tmp/spdk_mem_dump.txt" 00:04:52.639 } 00:04:52.639 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.639 08:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:52.639 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:52.639 1 heaps totaling size 816.000000 MiB 00:04:52.639 size: 816.000000 MiB heap id: 0 00:04:52.639 end heaps---------- 00:04:52.639 9 mempools totaling size 595.772034 MiB 00:04:52.639 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:52.639 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:52.639 size: 92.545471 MiB name: bdev_io_58072 00:04:52.639 size: 50.003479 MiB name: msgpool_58072 00:04:52.639 size: 36.509338 MiB name: fsdev_io_58072 00:04:52.639 size: 21.763794 MiB name: PDU_Pool 00:04:52.639 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:52.639 size: 4.133484 MiB name: evtpool_58072 00:04:52.639 size: 0.026123 MiB name: Session_Pool 00:04:52.639 end mempools------- 00:04:52.639 6 memzones totaling size 4.142822 MiB 00:04:52.639 size: 1.000366 MiB name: RG_ring_0_58072 00:04:52.639 size: 1.000366 MiB name: RG_ring_1_58072 00:04:52.639 size: 1.000366 MiB name: RG_ring_4_58072 00:04:52.639 size: 1.000366 MiB name: RG_ring_5_58072 00:04:52.639 size: 0.125366 MiB name: RG_ring_2_58072 00:04:52.639 size: 0.015991 MiB name: RG_ring_3_58072 00:04:52.639 end memzones------- 00:04:52.639 08:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.639 heap id: 0 total size: 816.000000 MiB number of busy elements: 314 number of free elements: 18 00:04:52.639 list of free elements. 
size: 16.791626 MiB 00:04:52.639 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:52.639 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:52.639 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:52.639 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:52.639 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:52.639 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:52.639 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:52.639 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:52.639 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:52.640 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:52.640 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:52.640 element at address: 0x20001ac00000 with size: 0.559021 MiB 00:04:52.640 element at address: 0x200000c00000 with size: 0.491638 MiB 00:04:52.640 element at address: 0x200018e00000 with size: 0.488464 MiB 00:04:52.640 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:52.640 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:52.640 element at address: 0x200028000000 with size: 0.391663 MiB 00:04:52.640 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:52.640 list of standard malloc elements. size: 199.287476 MiB 00:04:52.640 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:52.640 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:52.640 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:52.640 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:52.640 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:52.640 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:52.640 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:52.640 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:52.640 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:52.640 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:52.640 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:52.640 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:04:52.640 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:52.640 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:52.640 element at 
address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bff980 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:52.640 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d6c0 
with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:52.641 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:52.641 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac919c0 with size: 0.000244 MiB 
00:04:52.641 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:52.641 element at 
address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:52.641 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:52.642 element at address: 0x200028064440 with size: 0.000244 MiB 00:04:52.642 element at address: 0x200028064540 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b200 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b880 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d480 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806d980 
with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f080 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:52.642 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:52.642 list of memzone associated elements. 
size: 599.920898 MiB 00:04:52.642 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:52.642 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.642 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:52.642 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:52.642 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:52.642 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58072_0 00:04:52.642 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:52.642 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58072_0 00:04:52.642 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:52.642 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58072_0 00:04:52.642 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:52.642 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:52.642 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:52.642 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.642 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:52.642 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58072_0 00:04:52.642 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:52.642 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58072 00:04:52.642 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:52.642 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58072 00:04:52.642 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:52.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.642 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:52.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.642 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:52.642 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.642 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:52.642 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.642 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:52.642 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58072 00:04:52.642 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:52.642 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58072 00:04:52.642 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:52.642 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58072 00:04:52.642 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:52.642 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58072 00:04:52.642 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:52.642 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58072 00:04:52.643 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:52.643 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58072 00:04:52.643 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:52.643 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.643 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:52.643 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.643 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:52.643 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.643 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:52.643 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58072 00:04:52.643 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:52.643 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58072 00:04:52.643 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:52.643 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.643 element at address: 0x200028064640 with size: 0.023804 MiB 00:04:52.643 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.643 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:52.643 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58072 00:04:52.643 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:04:52.643 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.643 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:52.643 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58072 00:04:52.643 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:52.643 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58072 00:04:52.643 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:52.643 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58072 00:04:52.643 element at address: 0x20002806b300 with size: 0.000366 MiB 00:04:52.643 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.643 08:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.643 08:58:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58072 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58072 ']' 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58072 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58072 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.643 killing process with pid 58072 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58072' 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58072 00:04:52.643 08:58:31 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58072 00:04:54.018 00:04:54.018 real 0m2.324s 00:04:54.018 user 0m2.331s 00:04:54.018 sys 0m0.343s 00:04:54.018 08:58:32 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.018 ************************************ 00:04:54.018 END TEST dpdk_mem_utility 00:04:54.018 08:58:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.018 ************************************ 00:04:54.018 08:58:32 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:54.018 08:58:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.018 08:58:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.018 08:58:32 -- common/autotest_common.sh@10 -- # set +x 
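The teardown traced above follows the standard autotest_common.sh pattern: verify the pid is non-empty and alive, make sure it is not the sudo wrapper, then kill and reap it. A minimal sketch of that helper, assuming this simplified form (the real function carries extra retry and sudo handling):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0         # already gone, nothing to do
    if [ "$(uname)" = Linux ]; then
        # never signal the sudo wrapper itself
        [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # reap it so the exit status is visible
}

Each suite is then driven through run_test, which prints the START TEST/END TEST banners and the real/user/sys timing block seen around every test in this log.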
00:04:54.018 ************************************ 00:04:54.018 START TEST event 00:04:54.018 ************************************ 00:04:54.018 08:58:32 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:54.018 * Looking for test storage... 00:04:54.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:54.018 08:58:32 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:54.018 08:58:32 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:54.018 08:58:32 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:54.018 08:58:32 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:54.018 08:58:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.018 08:58:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.018 08:58:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.018 08:58:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.018 08:58:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.018 08:58:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.018 08:58:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.018 08:58:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.018 08:58:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.018 08:58:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.018 08:58:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.018 08:58:32 event -- scripts/common.sh@344 -- # case "$op" in 00:04:54.018 08:58:32 event -- scripts/common.sh@345 -- # : 1 00:04:54.018 08:58:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.018 08:58:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.018 08:58:32 event -- scripts/common.sh@365 -- # decimal 1 00:04:54.018 08:58:32 event -- scripts/common.sh@353 -- # local d=1 00:04:54.018 08:58:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.018 08:58:32 event -- scripts/common.sh@355 -- # echo 1 00:04:54.018 08:58:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.018 08:58:32 event -- scripts/common.sh@366 -- # decimal 2 00:04:54.019 08:58:32 event -- scripts/common.sh@353 -- # local d=2 00:04:54.019 08:58:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.019 08:58:32 event -- scripts/common.sh@355 -- # echo 2 00:04:54.019 08:58:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.019 08:58:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.019 08:58:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.019 08:58:32 event -- scripts/common.sh@368 -- # return 0 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.019 --rc genhtml_branch_coverage=1 00:04:54.019 --rc genhtml_function_coverage=1 00:04:54.019 --rc genhtml_legend=1 00:04:54.019 --rc geninfo_all_blocks=1 00:04:54.019 --rc geninfo_unexecuted_blocks=1 00:04:54.019 00:04:54.019 ' 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.019 --rc genhtml_branch_coverage=1 00:04:54.019 --rc genhtml_function_coverage=1 00:04:54.019 --rc genhtml_legend=1 00:04:54.019 --rc 
geninfo_all_blocks=1 00:04:54.019 --rc geninfo_unexecuted_blocks=1 00:04:54.019 00:04:54.019 ' 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.019 --rc genhtml_branch_coverage=1 00:04:54.019 --rc genhtml_function_coverage=1 00:04:54.019 --rc genhtml_legend=1 00:04:54.019 --rc geninfo_all_blocks=1 00:04:54.019 --rc geninfo_unexecuted_blocks=1 00:04:54.019 00:04:54.019 ' 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:54.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.019 --rc genhtml_branch_coverage=1 00:04:54.019 --rc genhtml_function_coverage=1 00:04:54.019 --rc genhtml_legend=1 00:04:54.019 --rc geninfo_all_blocks=1 00:04:54.019 --rc geninfo_unexecuted_blocks=1 00:04:54.019 00:04:54.019 ' 00:04:54.019 08:58:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:54.019 08:58:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:54.019 08:58:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:54.019 08:58:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.019 08:58:32 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.278 ************************************ 00:04:54.278 START TEST event_perf 00:04:54.278 ************************************ 00:04:54.278 08:58:32 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.278 Running I/O for 1 seconds...[2024-11-20 08:58:32.967001] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:54.278 [2024-11-20 08:58:32.967106] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58164 ] 00:04:54.278 [2024-11-20 08:58:33.124937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.537 [2024-11-20 08:58:33.204967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.537 [2024-11-20 08:58:33.205455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.537 [2024-11-20 08:58:33.205632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.537 [2024-11-20 08:58:33.205662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.479 Running I/O for 1 seconds... 00:04:55.479 lcore 0: 195545 00:04:55.479 lcore 1: 195546 00:04:55.479 lcore 2: 195546 00:04:55.479 lcore 3: 195542 00:04:55.479 done. 
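The four per-lcore counters above come out nearly identical because event_perf pins one reactor to each core in the 0xF mask and counts events for the single second requested with -t 1. A hypothetical post-processing one-liner (the capture file name event_perf.log is assumed) that totals the run:

awk '/lcore [0-9]+:/ { sum += $NF; n++ } END { printf "total: %d events/sec across %d lcores\n", sum, n }' event_perf.log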
00:04:55.479 00:04:55.479 real 0m1.394s 00:04:55.479 user 0m4.193s 00:04:55.479 sys 0m0.077s 00:04:55.479 08:58:34 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.479 ************************************ 00:04:55.479 END TEST event_perf 00:04:55.479 ************************************ 00:04:55.479 08:58:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.479 08:58:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.479 08:58:34 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.479 08:58:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.479 08:58:34 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.479 ************************************ 00:04:55.479 START TEST event_reactor 00:04:55.479 ************************************ 00:04:55.479 08:58:34 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.737 [2024-11-20 08:58:34.405784] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:55.737 [2024-11-20 08:58:34.405868] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58204 ] 00:04:55.737 [2024-11-20 08:58:34.549340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.737 [2024-11-20 08:58:34.631190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.112 test_start 00:04:57.112 oneshot 00:04:57.112 tick 100 00:04:57.112 tick 100 00:04:57.112 tick 250 00:04:57.112 tick 100 00:04:57.112 tick 100 00:04:57.112 tick 100 00:04:57.112 tick 250 00:04:57.112 tick 500 00:04:57.112 tick 100 00:04:57.112 tick 100 00:04:57.112 tick 250 00:04:57.112 tick 100 00:04:57.112 tick 100 00:04:57.112 test_end 00:04:57.112 00:04:57.112 real 0m1.373s 00:04:57.112 user 0m1.214s 00:04:57.112 sys 0m0.052s 00:04:57.112 08:58:35 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.112 ************************************ 00:04:57.112 END TEST event_reactor 00:04:57.112 ************************************ 00:04:57.112 08:58:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:57.112 08:58:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.112 08:58:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:57.112 08:58:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.112 08:58:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.112 ************************************ 00:04:57.112 START TEST event_reactor_perf 00:04:57.112 ************************************ 00:04:57.112 08:58:35 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.112 [2024-11-20 08:58:35.844026] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:04:57.112 [2024-11-20 08:58:35.844133] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58241 ] 00:04:57.112 [2024-11-20 08:58:35.999169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.371 [2024-11-20 08:58:36.077120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.307 test_start 00:04:58.307 test_end 00:04:58.307 Performance: 403157 events per second 00:04:58.307 00:04:58.307 real 0m1.387s 00:04:58.307 user 0m1.217s 00:04:58.307 sys 0m0.062s 00:04:58.307 ************************************ 00:04:58.307 END TEST event_reactor_perf 00:04:58.307 08:58:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.307 08:58:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:58.307 ************************************ 00:04:58.567 08:58:37 event -- event/event.sh@49 -- # uname -s 00:04:58.567 08:58:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.567 08:58:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.567 08:58:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.567 08:58:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.567 08:58:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:58.567 ************************************ 00:04:58.567 START TEST event_scheduler 00:04:58.567 ************************************ 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:58.567 * Looking for test storage... 
00:04:58.567 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.567 08:58:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.567 --rc genhtml_branch_coverage=1 00:04:58.567 --rc genhtml_function_coverage=1 00:04:58.567 --rc genhtml_legend=1 00:04:58.567 --rc geninfo_all_blocks=1 00:04:58.567 --rc geninfo_unexecuted_blocks=1 00:04:58.567 00:04:58.567 ' 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.567 --rc genhtml_branch_coverage=1 00:04:58.567 --rc genhtml_function_coverage=1 00:04:58.567 --rc genhtml_legend=1 00:04:58.567 --rc geninfo_all_blocks=1 00:04:58.567 --rc geninfo_unexecuted_blocks=1 00:04:58.567 00:04:58.567 ' 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.567 --rc genhtml_branch_coverage=1 00:04:58.567 --rc genhtml_function_coverage=1 00:04:58.567 --rc genhtml_legend=1 00:04:58.567 --rc geninfo_all_blocks=1 00:04:58.567 --rc geninfo_unexecuted_blocks=1 00:04:58.567 00:04:58.567 ' 00:04:58.567 08:58:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.567 --rc genhtml_branch_coverage=1 00:04:58.567 --rc genhtml_function_coverage=1 00:04:58.567 --rc genhtml_legend=1 00:04:58.567 --rc geninfo_all_blocks=1 00:04:58.567 --rc geninfo_unexecuted_blocks=1 00:04:58.567 00:04:58.567 ' 00:04:58.567 08:58:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
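The scheduler suite re-runs the same lcov version gate traced above: lt 1.15 2 splits both versions on separators and compares them field by field to decide whether the old-lcov coverage flags are still needed. A condensed, behavior-equivalent sketch, assuming purely numeric dot-separated fields (the real scripts/common.sh helpers also split on '-' and ':'):

lt() {
    # return 0 (true) when version $1 sorts strictly before version $2
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}                  # missing fields compare as 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1                                       # equal versions are not less-than
}

lt 1.15 2 && echo "lcov < 2: keep the extra branch/function coverage flags"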
00:04:58.567 08:58:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58306 00:04:58.567 08:58:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.567 08:58:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58306 00:04:58.568 08:58:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58306 ']' 00:04:58.568 08:58:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.568 08:58:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.568 08:58:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.568 08:58:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.568 08:58:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:58.568 08:58:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.568 [2024-11-20 08:58:37.459161] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:04:58.568 [2024-11-20 08:58:37.459253] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58306 ] 00:04:58.827 [2024-11-20 08:58:37.613089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.827 [2024-11-20 08:58:37.717505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.827 [2024-11-20 08:58:37.717980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.827 [2024-11-20 08:58:37.718558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.827 [2024-11-20 08:58:37.718764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:59.763 08:58:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.763 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.763 POWER: Cannot set governor of lcore 0 to performance 00:04:59.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.763 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.763 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.763 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.763 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:59.763 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:59.763 POWER: Unable to set Power 
Management Environment for lcore 0 00:04:59.763 [2024-11-20 08:58:38.324636] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:59.763 [2024-11-20 08:58:38.324691] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:59.763 [2024-11-20 08:58:38.324795] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:59.763 [2024-11-20 08:58:38.324864] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.763 [2024-11-20 08:58:38.324922] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.763 [2024-11-20 08:58:38.324970] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 [2024-11-20 08:58:38.544919] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 ************************************ 00:04:59.763 START TEST scheduler_create_thread 00:04:59.763 ************************************ 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 2 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 3 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:04:59.763 4 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 5 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 6 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 7 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 8 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 9 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 10 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:59.763 08:58:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.145 08:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.145 08:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.145 08:58:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.145 08:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.145 08:58:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.084 ************************************ 00:05:02.084 END TEST scheduler_create_thread 00:05:02.084 ************************************ 00:05:02.084 08:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.084 00:05:02.084 real 0m2.132s 00:05:02.084 user 0m0.017s 00:05:02.084 sys 0m0.004s 00:05:02.084 08:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.084 08:58:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.084 08:58:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:02.084 08:58:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58306 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58306 ']' 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58306 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58306 00:05:02.084 killing process with pid 58306 00:05:02.084 08:58:40 
event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58306' 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58306 00:05:02.084 08:58:40 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58306 00:05:02.345 [2024-11-20 08:58:41.172101] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:03.307 00:05:03.307 real 0m4.673s 00:05:03.307 user 0m8.093s 00:05:03.307 sys 0m0.336s 00:05:03.307 08:58:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.307 08:58:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.307 ************************************ 00:05:03.307 END TEST event_scheduler 00:05:03.307 ************************************ 00:05:03.307 08:58:41 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:03.307 08:58:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:03.307 08:58:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.307 08:58:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.307 08:58:41 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.307 ************************************ 00:05:03.307 START TEST app_repeat 00:05:03.307 ************************************ 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:03.307 Process app_repeat pid: 58412 00:05:03.307 spdk_app_start Round 0 00:05:03.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
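app_repeat is brought up the same way the scheduler app was: the harness starts the binary in the background, then polls until the new pid is listening on its UNIX-domain RPC socket, giving up after max_retries=100 as in the trace. A simplified sketch of that wait loop; the real waitforlisten also probes the RPC server with rpc.py, while this version only waits for the socket node to appear:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((max_retries--)); do
        kill -0 "$pid" 2>/dev/null || return 1     # app died before it could listen
        [ -S "$rpc_addr" ] && return 0             # socket exists; assume it is accepting
        sleep 0.1
    done
    return 1                                       # timed out
}

Once the socket is up, the test drives the app purely over RPC, e.g. rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096, as the trace below shows.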
00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58412 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58412' 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:03.307 08:58:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58412 ']' 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.307 08:58:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.307 [2024-11-20 08:58:42.050932] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:03.307 [2024-11-20 08:58:42.051040] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58412 ] 00:05:03.307 [2024-11-20 08:58:42.208295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:03.568 [2024-11-20 08:58:42.311881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.568 [2024-11-20 08:58:42.311914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.139 08:58:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.139 08:58:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:04.139 08:58:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.427 Malloc0 00:05:04.427 08:58:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.687 Malloc1 00:05:04.687 08:58:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # 
local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.687 08:58:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.944 /dev/nbd0 00:05:04.944 08:58:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.944 08:58:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.944 1+0 records in 00:05:04.944 1+0 records out 00:05:04.944 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796887 s, 5.1 MB/s 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:04.944 08:58:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:04.944 08:58:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.944 08:58:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.944 08:58:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.944 /dev/nbd1 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:05.201 08:58:43 event.app_repeat -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.201 1+0 records in 00:05:05.201 1+0 records out 00:05:05.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385265 s, 10.6 MB/s 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:05.201 08:58:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.201 08:58:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.201 08:58:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.201 { 00:05:05.201 "nbd_device": "/dev/nbd0", 00:05:05.201 "bdev_name": "Malloc0" 00:05:05.202 }, 00:05:05.202 { 00:05:05.202 "nbd_device": "/dev/nbd1", 00:05:05.202 "bdev_name": "Malloc1" 00:05:05.202 } 00:05:05.202 ]' 00:05:05.202 08:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.202 { 00:05:05.202 "nbd_device": "/dev/nbd0", 00:05:05.202 "bdev_name": "Malloc0" 00:05:05.202 }, 00:05:05.202 { 00:05:05.202 "nbd_device": "/dev/nbd1", 00:05:05.202 "bdev_name": "Malloc1" 00:05:05.202 } 00:05:05.202 ]' 00:05:05.202 08:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.459 /dev/nbd1' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.459 /dev/nbd1' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.459 256+0 records in 00:05:05.459 256+0 records out 00:05:05.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00677088 s, 155 MB/s 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.459 256+0 records in 00:05:05.459 256+0 records out 00:05:05.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316705 s, 33.1 MB/s 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.459 256+0 records in 00:05:05.459 256+0 records out 00:05:05.459 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222363 s, 47.2 MB/s 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.459 08:58:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.717 08:58:44 
event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.717 08:58:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.974 08:58:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.974 08:58:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.232 08:58:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:07.166 [2024-11-20 08:58:45.841488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.166 [2024-11-20 08:58:45.930431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.166 [2024-11-20 08:58:45.930434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.166 [2024-11-20 08:58:46.054605] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
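Round 0 above completed the whole data path: stage random bytes in a scratch file, copy them onto each exported NBD device with O_DIRECT, then compare the devices back against the file. The write and verify halves, condensed from the nbd_common.sh trace (argument plumbing simplified, $rootdir as before):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=$rootdir/test/event/nbdrandtest

        if [[ $operation == write ]]; then
            # 256 x 4 KiB = 1 MiB of random data, pushed through every device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for dev in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
            done
        elif [[ $operation == verify ]]; then
            # byte-compare the first 1 MiB of each device against the staged file
            for dev in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$dev"
            done
            rm "$tmp_file"
        fi
    }

cmp exits non-zero on the first mismatch, so a corrupted round fails the test immediately rather than printing a diff.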
00:05:07.166 [2024-11-20 08:58:46.054663] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.696 spdk_app_start Round 1 00:05:09.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.696 08:58:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.696 08:58:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.696 08:58:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58412 ']' 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.696 08:58:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:09.696 08:58:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.696 Malloc0 00:05:09.696 08:58:48 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.954 Malloc1 00:05:09.954 08:58:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.954 08:58:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.212 /dev/nbd0 00:05:10.212 08:58:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
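waitfornbd, entered next for Round 1, is a bounded readiness check: wait for the name to appear in /proc/partitions, then prove the device serves I/O with one O_DIRECT read whose size is verified. A sketch along the trace; the inter-poll delay and the retry around dd are assumptions, since the log only shows the successful iterations:

    waitfornbd() {
        local nbd_name=$1 i size
        local test_file=$rootdir/test/event/nbdtest

        for ((i = 1; i <= 20; i++)); do       # wait for the kernel to publish the node
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                          # delay assumed; not visible in the trace
        done
        ((i <= 20)) || return 1

        for ((i = 1; i <= 20; i++)); do       # a direct read proves end-to-end I/O
            dd if="/dev/$nbd_name" of="$test_file" bs=4096 count=1 iflag=direct || continue
            size=$(stat -c %s "$test_file")
            rm -f "$test_file"
            [[ $size != 0 ]] && return 0
        done
        return 1
    }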
00:05:10.212 08:58:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.212 1+0 records in 00:05:10.212 1+0 records out 00:05:10.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041096 s, 10.0 MB/s 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.212 08:58:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.213 08:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.213 08:58:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.213 08:58:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.474 /dev/nbd1 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.474 1+0 records in 00:05:10.474 1+0 records out 00:05:10.474 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016577 s, 24.7 MB/s 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:10.474 08:58:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.474 08:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.736 { 00:05:10.736 "nbd_device": "/dev/nbd0", 00:05:10.736 "bdev_name": "Malloc0" 00:05:10.736 }, 00:05:10.736 { 00:05:10.736 "nbd_device": "/dev/nbd1", 00:05:10.736 "bdev_name": "Malloc1" 00:05:10.736 } 00:05:10.736 ]' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.736 { 00:05:10.736 "nbd_device": "/dev/nbd0", 00:05:10.736 "bdev_name": "Malloc0" 00:05:10.736 }, 00:05:10.736 { 00:05:10.736 "nbd_device": "/dev/nbd1", 00:05:10.736 "bdev_name": "Malloc1" 00:05:10.736 } 00:05:10.736 ]' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.736 /dev/nbd1' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.736 /dev/nbd1' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.736 256+0 records in 00:05:10.736 256+0 records out 00:05:10.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00488172 s, 215 MB/s 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.736 256+0 records in 00:05:10.736 256+0 records out 00:05:10.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017483 s, 60.0 MB/s 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.736 08:58:49 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.736 256+0 records in 00:05:10.736 256+0 records out 00:05:10.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176665 s, 59.4 MB/s 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.736 08:58:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.994 08:58:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.251 08:58:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.251 08:58:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.251 08:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.251 08:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.509 08:58:50 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.509 08:58:50 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.768 08:58:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.334 [2024-11-20 08:58:51.045624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.334 [2024-11-20 08:58:51.118517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.334 [2024-11-20 08:58:51.118523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.334 [2024-11-20 08:58:51.217858] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.334 [2024-11-20 08:58:51.217913] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.861 spdk_app_start Round 2 00:05:14.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:14.861 08:58:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.861 08:58:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.861 08:58:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58412 ']' 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
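waitforlisten prints the 'Waiting for process…' line traced here and then polls with xtrace disabled, so its retry body never reaches the log. A plausible shape, hedged accordingly: the rpc_addr default and max_retries=100 come from the trace, but the liveness probe itself is an assumption:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            # assumed probe: any RPC answered on the socket means the app is listening
            if "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }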
00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.861 08:58:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:14.861 08:58:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.120 Malloc0 00:05:15.120 08:58:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:15.378 Malloc1 00:05:15.378 08:58:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.378 08:58:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.637 /dev/nbd0 00:05:15.637 08:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.637 08:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.637 1+0 records in 00:05:15.637 1+0 records out 
00:05:15.637 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000130362 s, 31.4 MB/s 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.637 08:58:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.637 08:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.637 08:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.637 08:58:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.895 /dev/nbd1 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.895 1+0 records in 00:05:15.895 1+0 records out 00:05:15.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216228 s, 18.9 MB/s 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:15.895 08:58:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.895 08:58:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:16.154 { 00:05:16.154 "nbd_device": "/dev/nbd0", 00:05:16.154 "bdev_name": "Malloc0" 00:05:16.154 }, 00:05:16.154 { 00:05:16.154 "nbd_device": "/dev/nbd1", 00:05:16.154 "bdev_name": "Malloc1" 00:05:16.154 } 
00:05:16.154 ]' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:16.154 { 00:05:16.154 "nbd_device": "/dev/nbd0", 00:05:16.154 "bdev_name": "Malloc0" 00:05:16.154 }, 00:05:16.154 { 00:05:16.154 "nbd_device": "/dev/nbd1", 00:05:16.154 "bdev_name": "Malloc1" 00:05:16.154 } 00:05:16.154 ]' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:16.154 /dev/nbd1' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:16.154 /dev/nbd1' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:16.154 256+0 records in 00:05:16.154 256+0 records out 00:05:16.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596858 s, 176 MB/s 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:16.154 256+0 records in 00:05:16.154 256+0 records out 00:05:16.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162535 s, 64.5 MB/s 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:16.154 256+0 records in 00:05:16.154 256+0 records out 00:05:16.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191315 s, 54.8 MB/s 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:16.154 08:58:54 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.154 08:58:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:16.412 08:58:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.679 08:58:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.680 08:58:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.680 08:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.680 08:58:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.680 08:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.680 08:58:55 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.938 08:58:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.938 08:58:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:17.196 08:58:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:17.761 [2024-11-20 08:58:56.477775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.761 [2024-11-20 08:58:56.548084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.761 [2024-11-20 08:58:56.548242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.762 [2024-11-20 08:58:56.647572] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:17.762 [2024-11-20 08:58:56.647759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:20.288 08:58:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58412 /var/tmp/spdk-nbd.sock 00:05:20.288 08:58:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58412 ']' 00:05:20.288 08:58:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.288 08:58:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.288 08:58:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
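Teardown is checked as carefully as setup: waitfornbd_exit polls /proc/partitions until the name disappears, and nbd_get_count re-counts exported devices over RPC, expecting 0 once both disks are stopped. Note the empty-list case in the trace: grep -c exits non-zero on zero matches, so the count expression falls through a true to keep errexit happy while still capturing 0. Sketches of both, condensed from the trace (the poll delay is assumed):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break   # gone: success
            sleep 0.1
        done
        ((i <= 20))   # fail if the device never went away
    }

    nbd_get_count() {
        local rpc_server=$1 json name count
        json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        name=$(echo "$json" | jq -r '.[] | .nbd_device')
        count=$(echo "$name" | grep -c /dev/nbd || true)   # 0 when nothing is attached
        echo "$count"
    }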
00:05:20.288 08:58:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.288 08:58:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:20.288 08:58:59 event.app_repeat -- event/event.sh@39 -- # killprocess 58412 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58412 ']' 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58412 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58412 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.288 killing process with pid 58412 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58412' 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58412 00:05:20.288 08:58:59 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58412 00:05:20.856 spdk_app_start is called in Round 0. 00:05:20.856 Shutdown signal received, stop current app iteration 00:05:20.856 Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 reinitialization... 00:05:20.856 spdk_app_start is called in Round 1. 00:05:20.856 Shutdown signal received, stop current app iteration 00:05:20.856 Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 reinitialization... 00:05:20.856 spdk_app_start is called in Round 2. 00:05:20.856 Shutdown signal received, stop current app iteration 00:05:20.856 Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 reinitialization... 00:05:20.856 spdk_app_start is called in Round 3. 00:05:20.856 Shutdown signal received, stop current app iteration 00:05:20.856 08:58:59 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:20.856 08:58:59 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:20.856 00:05:20.856 real 0m17.662s 00:05:20.856 user 0m38.532s 00:05:20.856 sys 0m2.123s 00:05:20.856 ************************************ 00:05:20.856 END TEST app_repeat 00:05:20.856 ************************************ 00:05:20.856 08:58:59 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.856 08:58:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.856 08:58:59 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:20.856 08:58:59 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:20.856 08:58:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.856 08:58:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.856 08:58:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.856 ************************************ 00:05:20.856 START TEST cpu_locks 00:05:20.856 ************************************ 00:05:20.856 08:58:59 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:21.116 * Looking for test storage... 
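killprocess, which just brought down pid 58412 (and pid 58306 earlier), only signals a pid it can positively identify: kill -0 confirms the process is alive, ps resolves its command name on Linux, and a name of sudo diverts to special handling. Condensed from the autotest_common.sh trace; the sudo branch is never exercised in this log, so its body here is a guess:

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0      # already gone, nothing to do
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name == sudo ]]; then
            return 1   # guess: refuse to blind-kill a privileged wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true   # reap; tolerate non-zero exit from SIGTERM
    }

In this run the name resolves to reactor_0 (reactor_2 for the scheduler app), the SPDK reactor thread name, so the guard passes and the target gets a plain kill followed by wait.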
00:05:21.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:21.116 08:58:59 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.116 08:58:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.116 08:58:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.116 08:58:59 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:21.116 08:58:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.117 08:58:59 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.117 --rc genhtml_branch_coverage=1 00:05:21.117 --rc genhtml_function_coverage=1 00:05:21.117 --rc genhtml_legend=1 00:05:21.117 --rc geninfo_all_blocks=1 00:05:21.117 --rc geninfo_unexecuted_blocks=1 00:05:21.117 00:05:21.117 ' 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.117 --rc genhtml_branch_coverage=1 00:05:21.117 --rc genhtml_function_coverage=1 
00:05:21.117 --rc genhtml_legend=1 00:05:21.117 --rc geninfo_all_blocks=1 00:05:21.117 --rc geninfo_unexecuted_blocks=1 00:05:21.117 00:05:21.117 ' 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.117 --rc genhtml_branch_coverage=1 00:05:21.117 --rc genhtml_function_coverage=1 00:05:21.117 --rc genhtml_legend=1 00:05:21.117 --rc geninfo_all_blocks=1 00:05:21.117 --rc geninfo_unexecuted_blocks=1 00:05:21.117 00:05:21.117 ' 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.117 --rc genhtml_branch_coverage=1 00:05:21.117 --rc genhtml_function_coverage=1 00:05:21.117 --rc genhtml_legend=1 00:05:21.117 --rc geninfo_all_blocks=1 00:05:21.117 --rc geninfo_unexecuted_blocks=1 00:05:21.117 00:05:21.117 ' 00:05:21.117 08:58:59 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:21.117 08:58:59 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:21.117 08:58:59 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:21.117 08:58:59 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.117 08:58:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.117 ************************************ 00:05:21.117 START TEST default_locks 00:05:21.117 ************************************ 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58837 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58837 00:05:21.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58837 ']' 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.117 08:58:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:21.117 [2024-11-20 08:58:59.936147] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
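The lcov probe traced above gates the 1.x-era --rc coverage options on the installed lcov version: cmp_versions splits both version strings on '.', '-' and ':' (the IFS=.-: visible in the trace) and compares them field by field. A condensed, illustrative reimplementation of that less-than test (the in-tree helper additionally sanitizes non-numeric fields via decimal; here a non-numeric field would simply break the arithmetic):

    # lt VER1 VER2 -> exit 0 iff VER1 < VER2, comparing dot/dash/colon fields.
    lt() {
      local IFS=.-: v
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        # Missing trailing fields are treated as 0, so 1.15 vs 2 compares 1<2.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1  # equal versions are not less-than
    }

    lt 1.15 2 && echo 'pre-2.x lcov: keep the branch/function rc flags'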
00:05:21.117 [2024-11-20 08:58:59.936239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58837 ] 00:05:21.376 [2024-11-20 08:59:00.087403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.376 [2024-11-20 08:59:00.169200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.942 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.942 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:21.942 08:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58837 00:05:21.942 08:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:21.942 08:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58837 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58837 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58837 ']' 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58837 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58837 00:05:22.200 killing process with pid 58837 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58837' 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58837 00:05:22.200 08:59:00 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58837 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58837 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58837 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58837 00:05:23.571 08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58837 ']' 00:05:23.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
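The locks_exist check traced just above (event/cpu_locks.sh@22) is the heart of default_locks: it asks lslocks whether pid 58837 holds a lock on one of SPDK's /var/tmp/spdk_cpu_lock_* files. The NOT-wrapped waitforlisten that follows is deliberately expected to fail once the process has been killed, which is exactly what the ERROR entry below records. A standalone sketch of the check (lslocks is the util-linux tool the trace calls; the wrapper function itself is illustrative):

    # Does the given spdk_tgt pid hold at least one per-core CPU lock file?
    locks_exist() {
      local pid=$1
      # SPDK names its lock files /var/tmp/spdk_cpu_lock_<core>, so matching
      # the prefix in lslocks' per-process output is enough for the assert.
      lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 58837 && echo 'cpu locks held'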
00:05:23.571 ERROR: process (pid: 58837) is no longer running
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.571 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58837) - No such process
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
08:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
08:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
08:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
08:59:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:23.571
00:05:23.571 real 0m2.236s
00:05:23.571 user 0m2.225s
00:05:23.571 sys 0m0.390s
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
08:59:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.571 ************************************
00:05:23.571 END TEST default_locks
00:05:23.571 ************************************
00:05:23.571
08:59:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
08:59:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
08:59:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
08:59:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:23.571 ************************************
00:05:23.571 START TEST default_locks_via_rpc
00:05:23.571 ************************************
08:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
08:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58890
08:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58890
08:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58890 ']'
08:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
08:59:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
08:59:02 event.cpu_locks.default_locks_via_rpc --
common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.571 08:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.571 08:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.571 08:59:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.571 [2024-11-20 08:59:02.241614] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:23.571 [2024-11-20 08:59:02.241843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58890 ] 00:05:23.571 [2024-11-20 08:59:02.397740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.571 [2024-11-20 08:59:02.481241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58890 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58890 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58890 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58890 ']' 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58890 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:24.512 08:59:03 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58890 00:05:24.512 killing process with pid 58890 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58890' 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58890 00:05:24.512 08:59:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58890 00:05:25.888 ************************************ 00:05:25.888 END TEST default_locks_via_rpc 00:05:25.888 ************************************ 00:05:25.888 00:05:25.888 real 0m2.332s 00:05:25.888 user 0m2.354s 00:05:25.888 sys 0m0.408s 00:05:25.888 08:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.888 08:59:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.888 08:59:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:25.888 08:59:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.888 08:59:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.888 08:59:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.888 ************************************ 00:05:25.888 START TEST non_locking_app_on_locked_coremask 00:05:25.888 ************************************ 00:05:25.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58953 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58953 /var/tmp/spdk.sock 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58953 ']' 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.888 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.889 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.889 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.889 08:59:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.889 [2024-11-20 08:59:04.640944] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
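The default_locks_via_rpc run that ends above flips the same locking on and off at runtime instead of at startup, via the two RPCs visible in the trace. A hedged sketch of that flow (the method names framework_disable_cpumask_locks and framework_enable_cpumask_locks appear verbatim in the trace; the rpc.py path is the usual in-tree location and is an assumption of this sketch):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'

    $RPC framework_disable_cpumask_locks                 # target releases its lock files
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock   # expect 0 while disabled
    $RPC framework_enable_cpumask_locks                  # locks re-acquired for mask 0x1
    lslocks -p "$spdk_tgt_pid" | grep -c spdk_cpu_lock   # expect 1 (core 0)

In the trace the same assertions are made with no_locks and locks_exist rather than a raw grep count.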
00:05:25.889 [2024-11-20 08:59:04.641063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:05:25.889 [2024-11-20 08:59:04.803894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.150 [2024-11-20 08:59:04.909572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58969 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58969 /var/tmp/spdk2.sock 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58969 ']' 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.722 08:59:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.722 [2024-11-20 08:59:05.613889] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:26.722 [2024-11-20 08:59:05.614035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58969 ] 00:05:26.982 [2024-11-20 08:59:05.797230] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:26.982 [2024-11-20 08:59:05.797284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.248 [2024-11-20 08:59:06.004272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.634 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.634 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:28.634 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58953 00:05:28.634 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58953 00:05:28.634 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.895 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58953 00:05:28.895 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58953 ']' 00:05:28.895 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58953 00:05:28.895 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.895 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.895 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58953 00:05:28.895 killing process with pid 58953 00:05:28.896 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.896 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.896 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58953' 00:05:28.896 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58953 00:05:28.896 08:59:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58953 00:05:32.201 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58969 00:05:32.201 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58969 ']' 00:05:32.201 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58969 00:05:32.201 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:32.201 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.202 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58969 00:05:32.202 killing process with pid 58969 00:05:32.202 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.202 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.202 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58969' 00:05:32.202 08:59:10 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58969 00:05:32.202 08:59:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58969 00:05:33.589 ************************************ 00:05:33.589 END TEST non_locking_app_on_locked_coremask 00:05:33.589 ************************************ 00:05:33.589 00:05:33.589 real 0m7.856s 00:05:33.589 user 0m8.098s 00:05:33.589 sys 0m0.931s 00:05:33.589 08:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.589 08:59:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.589 08:59:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:33.589 08:59:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.589 08:59:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.589 08:59:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.589 ************************************ 00:05:33.589 START TEST locking_app_on_unlocked_coremask 00:05:33.589 ************************************ 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59077 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59077 /var/tmp/spdk.sock 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59077 ']' 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.589 08:59:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.852 [2024-11-20 08:59:12.581512] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:33.852 [2024-11-20 08:59:12.581670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59077 ] 00:05:33.852 [2024-11-20 08:59:12.745421] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
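non_locking_app_on_locked_coremask, which wraps up above, shows the intended coexistence pattern: a second target may reuse an already-locked core as long as it opts out of locking. Condensed from the two launches in the trace (binary path, mask, flags and sockets all as logged; backgrounding and cleanup are omitted here):

    SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    # First instance claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
    $SPDK_TGT -m 0x1 &

    # Second instance shares core 0 but never touches the lock files, and
    # answers RPCs on its own socket so the two targets don't collide.
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &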
00:05:33.852 [2024-11-20 08:59:12.745488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.113 [2024-11-20 08:59:12.885243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59093 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59093 /var/tmp/spdk2.sock 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59093 ']' 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.058 08:59:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.058 [2024-11-20 08:59:13.694707] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:35.058 [2024-11-20 08:59:13.694844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59093 ] 00:05:35.058 [2024-11-20 08:59:13.876256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.319 [2024-11-20 08:59:14.150332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.706 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.706 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:36.706 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59093 00:05:36.706 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59093 00:05:36.706 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.970 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59077 00:05:36.970 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59077 ']' 00:05:36.970 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59077 00:05:36.970 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59077 00:05:36.971 killing process with pid 59077 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59077' 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59077 00:05:36.971 08:59:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59077 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59093 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59093 ']' 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59093 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59093 00:05:40.295 killing process with pid 59093 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.295 08:59:19 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59093' 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59093 00:05:40.295 08:59:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59093 00:05:42.210 ************************************ 00:05:42.210 END TEST locking_app_on_unlocked_coremask 00:05:42.210 ************************************ 00:05:42.210 00:05:42.210 real 0m8.307s 00:05:42.210 user 0m8.398s 00:05:42.210 sys 0m1.061s 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.210 08:59:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:42.210 08:59:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.210 08:59:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.210 08:59:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.210 ************************************ 00:05:42.210 START TEST locking_app_on_locked_coremask 00:05:42.210 ************************************ 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59206 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59206 /var/tmp/spdk.sock 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59206 ']' 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.210 08:59:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:42.210 [2024-11-20 08:59:20.954115] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
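Every one of these startups blocks in waitforlisten, which is where the repeated "Waiting for process to start up and listen on UNIX domain socket ..." entries come from. A minimal sketch of such a retry loop (max_retries=100 matches the traced default; the in-tree helper in autotest_common.sh also probes the socket with an actual RPC, so treating an existing socket file as "listening" is a simplification here):

    waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died: give up early
        [[ -S $rpc_addr ]] && return 0           # socket exists: ready enough
        sleep 0.1
      done
      return 1
    }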
00:05:42.210 [2024-11-20 08:59:20.954258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ] 00:05:42.210 [2024-11-20 08:59:21.113021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.472 [2024-11-20 08:59:21.254777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59222 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59222 /var/tmp/spdk2.sock 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59222 /var/tmp/spdk2.sock 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59222 /var/tmp/spdk2.sock 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59222 ']' 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.418 08:59:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.418 [2024-11-20 08:59:22.072652] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:43.418 [2024-11-20 08:59:22.073027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:05:43.418 [2024-11-20 08:59:22.256290] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59206 has claimed it. 00:05:43.418 [2024-11-20 08:59:22.256389] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:43.994 ERROR: process (pid: 59222) is no longer running 00:05:43.994 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59222) - No such process 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59206 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59206 00:05:43.994 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.255 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59206 00:05:44.255 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59206 ']' 00:05:44.255 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59206 00:05:44.255 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:44.255 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:44.255 08:59:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59206 00:05:44.255 killing process with pid 59206 00:05:44.255 08:59:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.255 08:59:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.255 08:59:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59206' 00:05:44.255 08:59:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59206 00:05:44.255 08:59:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59206 00:05:46.169 ************************************ 00:05:46.169 END TEST locking_app_on_locked_coremask 00:05:46.169 ************************************ 00:05:46.169 00:05:46.169 real 0m3.842s 00:05:46.169 user 0m3.989s 00:05:46.169 sys 0m0.740s 00:05:46.169 08:59:24 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.169 08:59:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.169 08:59:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:46.169 08:59:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.169 08:59:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.169 08:59:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.169 ************************************ 00:05:46.169 START TEST locking_overlapped_coremask 00:05:46.169 ************************************ 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59286 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59286 /var/tmp/spdk.sock 00:05:46.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59286 ']' 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.169 08:59:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.169 [2024-11-20 08:59:24.869185] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
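locking_app_on_locked_coremask, summarized above, is a negative test: the second instance on the already-claimed core 0 must fail, so its waitforlisten is wrapped in the NOT helper that inverts the exit status. A stripped-down sketch of that pattern (the real helper also validates its argument with type -t and inspects es > 128 to tell signal deaths apart, as the es handling in the trace shows; this keeps only the inversion):

    NOT() {
      local es=0
      "$@" || es=$?
      (( es != 0 ))   # succeed only when the wrapped command failed
    }

    # Core 0 is already claimed by pid 59206, so the second target must fail:
    NOT waitforlisten 59222 /var/tmp/spdk2.sock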
00:05:46.169 [2024-11-20 08:59:24.869564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59286 ] 00:05:46.169 [2024-11-20 08:59:25.036402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:46.430 [2024-11-20 08:59:25.174361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.430 [2024-11-20 08:59:25.174723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.430 [2024-11-20 08:59:25.174724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59304 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59304 /var/tmp/spdk2.sock 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59304 /var/tmp/spdk2.sock 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59304 /var/tmp/spdk2.sock 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59304 ']' 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.071 08:59:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.071 [2024-11-20 08:59:25.982061] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:05:47.072 [2024-11-20 08:59:25.982215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ] 00:05:47.334 [2024-11-20 08:59:26.163754] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59286 has claimed it. 00:05:47.334 [2024-11-20 08:59:26.167912] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.910 ERROR: process (pid: 59304) is no longer running 00:05:47.910 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59304) - No such process 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59286 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59286 ']' 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59286 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59286 00:05:47.910 killing process with pid 59286 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59286' 00:05:47.910 08:59:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59286 00:05:47.910 08:59:26 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59286 00:05:49.826 00:05:49.826 real 0m3.613s 00:05:49.826 user 0m9.658s 00:05:49.826 sys 0m0.588s 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.826 ************************************ 00:05:49.826 END TEST locking_overlapped_coremask 00:05:49.826 ************************************ 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 08:59:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:49.826 08:59:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.826 08:59:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.826 08:59:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 ************************************ 00:05:49.826 START TEST locking_overlapped_coremask_via_rpc 00:05:49.826 ************************************ 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:49.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59362 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59362 ']' 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:49.826 08:59:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.826 [2024-11-20 08:59:28.552427] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:49.826 [2024-11-20 08:59:28.552593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59362 ] 00:05:49.826 [2024-11-20 08:59:28.720004] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
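After the overlapped-mask conflict above (0x7 vs 0x1c clashing on core 2), check_remaining_locks asserts that exactly the lock files of the surviving 0x7 target remain. The comparison from the trace, reassembled as a small sketch (both array assignments are copied from the logged commands):

    check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
      # Glob expansion is sorted, so comparing the joined arrays catches
      # missing as well as unexpected lock files in one test.
      [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }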
00:05:49.826 [2024-11-20 08:59:28.720088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:50.088 [2024-11-20 08:59:28.864425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.088 [2024-11-20 08:59:28.865029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.088 [2024-11-20 08:59:28.865227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59380 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59380 ']' 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.034 08:59:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.034 [2024-11-20 08:59:29.680638] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:51.034 [2024-11-20 08:59:29.681052] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59380 ] 00:05:51.034 [2024-11-20 08:59:29.866433] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:51.034 [2024-11-20 08:59:29.867888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:51.295 [2024-11-20 08:59:30.158535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.295 [2024-11-20 08:59:30.161112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.295 [2024-11-20 08:59:30.161131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.841 [2024-11-20 08:59:32.305145] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59362 has claimed it. 
00:05:53.841 request: 00:05:53.841 { 00:05:53.841 "method": "framework_enable_cpumask_locks", 00:05:53.841 "req_id": 1 00:05:53.841 } 00:05:53.841 Got JSON-RPC error response 00:05:53.841 response: 00:05:53.841 { 00:05:53.841 "code": -32603, 00:05:53.841 "message": "Failed to claim CPU core: 2" 00:05:53.841 } 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59362 ']' 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59380 ']' 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
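The -32603 failure above is the scenario the test constructs on purpose: the first target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), both launched with --disable-cpumask-locks so neither claims cores at startup. Enabling locks over RPC on the first target then claims lock files for cores 0-2, so the same RPC against the second target cannot claim the shared core 2. A minimal sketch of the sequence, assuming an SPDK checkout at $SPDK (the variable is illustrative; flags and sockets match the trace):

    $SPDK/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, /var/tmp/spdk.sock
    $SPDK/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4
    $SPDK/scripts/rpc.py framework_enable_cpumask_locks                          # first target locks cores 0-2
    $SPDK/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed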
00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:53.841 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:54.104 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:54.104 00:05:54.104 real 0m4.298s 00:05:54.104 user 0m1.382s 00:05:54.104 sys 0m0.178s 00:05:54.104 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.104 ************************************ 00:05:54.104 08:59:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.104 END TEST locking_overlapped_coremask_via_rpc 00:05:54.104 ************************************ 00:05:54.104 08:59:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:54.104 08:59:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59362 ]] 00:05:54.104 08:59:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59362 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59362 ']' 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59362 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59362 00:05:54.104 killing process with pid 59362 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59362' 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59362 00:05:54.104 08:59:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59362 00:05:56.018 08:59:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59380 ]] 00:05:56.018 08:59:34 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59380 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59380 ']' 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59380 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.018 
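check_remaining_locks, traced above, then asserts that the lock files on disk are exactly one per core claimed by the first target. The escaped-glob comparison in the trace reduces to the following, a sketch assuming the 0x7 mask (cores 0-2):

    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # one per core in mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # pass only when the two sets match exactly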
08:59:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59380 00:05:56.018 killing process with pid 59380 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59380' 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59380 00:05:56.018 08:59:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59380 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59362 ]] 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59362 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59362 ']' 00:05:57.443 Process with pid 59362 is not found 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59362 00:05:57.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59362) - No such process 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59362 is not found' 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59380 ]] 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59380 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59380 ']' 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59380 00:05:57.443 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59380) - No such process 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59380 is not found' 00:05:57.443 Process with pid 59380 is not found 00:05:57.443 08:59:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:57.443 ************************************ 00:05:57.443 END TEST cpu_locks 00:05:57.443 ************************************ 00:05:57.443 00:05:57.443 real 0m36.560s 00:05:57.443 user 1m6.028s 00:05:57.443 sys 0m5.402s 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.443 08:59:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.443 ************************************ 00:05:57.443 END TEST event 00:05:57.443 ************************************ 00:05:57.443 00:05:57.443 real 1m3.557s 00:05:57.443 user 1m59.458s 00:05:57.443 sys 0m8.278s 00:05:57.443 08:59:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.443 08:59:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.704 08:59:36 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:57.704 08:59:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.704 08:59:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.704 08:59:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.704 ************************************ 00:05:57.704 START TEST thread 00:05:57.704 ************************************ 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:57.704 * Looking for test storage... 
00:05:57.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.704 08:59:36 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.704 08:59:36 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.704 08:59:36 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.704 08:59:36 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.704 08:59:36 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.704 08:59:36 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.704 08:59:36 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.704 08:59:36 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.704 08:59:36 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.704 08:59:36 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.704 08:59:36 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.704 08:59:36 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:57.704 08:59:36 thread -- scripts/common.sh@345 -- # : 1 00:05:57.704 08:59:36 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.704 08:59:36 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.704 08:59:36 thread -- scripts/common.sh@365 -- # decimal 1 00:05:57.704 08:59:36 thread -- scripts/common.sh@353 -- # local d=1 00:05:57.704 08:59:36 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.704 08:59:36 thread -- scripts/common.sh@355 -- # echo 1 00:05:57.704 08:59:36 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.704 08:59:36 thread -- scripts/common.sh@366 -- # decimal 2 00:05:57.704 08:59:36 thread -- scripts/common.sh@353 -- # local d=2 00:05:57.704 08:59:36 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.704 08:59:36 thread -- scripts/common.sh@355 -- # echo 2 00:05:57.704 08:59:36 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.704 08:59:36 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.704 08:59:36 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.704 08:59:36 thread -- scripts/common.sh@368 -- # return 0 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.704 08:59:36 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.704 --rc genhtml_branch_coverage=1 00:05:57.704 --rc genhtml_function_coverage=1 00:05:57.704 --rc genhtml_legend=1 00:05:57.704 --rc geninfo_all_blocks=1 00:05:57.704 --rc geninfo_unexecuted_blocks=1 00:05:57.705 00:05:57.705 ' 00:05:57.705 08:59:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.705 --rc genhtml_branch_coverage=1 00:05:57.705 --rc genhtml_function_coverage=1 00:05:57.705 --rc genhtml_legend=1 00:05:57.705 --rc geninfo_all_blocks=1 00:05:57.705 --rc geninfo_unexecuted_blocks=1 00:05:57.705 00:05:57.705 ' 00:05:57.705 08:59:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:57.705 --rc genhtml_branch_coverage=1 00:05:57.705 --rc genhtml_function_coverage=1 00:05:57.705 --rc genhtml_legend=1 00:05:57.705 --rc geninfo_all_blocks=1 00:05:57.705 --rc geninfo_unexecuted_blocks=1 00:05:57.705 00:05:57.705 ' 00:05:57.705 08:59:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.705 --rc genhtml_branch_coverage=1 00:05:57.705 --rc genhtml_function_coverage=1 00:05:57.705 --rc genhtml_legend=1 00:05:57.705 --rc geninfo_all_blocks=1 00:05:57.705 --rc geninfo_unexecuted_blocks=1 00:05:57.705 00:05:57.705 ' 00:05:57.705 08:59:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.705 08:59:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:57.705 08:59:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.705 08:59:36 thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.705 ************************************ 00:05:57.705 START TEST thread_poller_perf 00:05:57.705 ************************************ 00:05:57.705 08:59:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:57.966 [2024-11-20 08:59:36.622151] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:57.966 [2024-11-20 08:59:36.622503] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59559 ] 00:05:57.966 [2024-11-20 08:59:36.785509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.228 [2024-11-20 08:59:36.922066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.228 Running 1000 pollers for 1 seconds with 1 microseconds period. 
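The banner above decodes poller_perf's flags: -b is the number of pollers to register, -l the poller period in microseconds (0 registers active pollers instead of timed ones), and -t the run time in seconds. The two invocations in this suite differ only in the period (paths relative to the SPDK tree):

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 timed pollers, 1 us period, 1 s run
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # same, but with active (0 us) pollers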
00:05:59.619 [2024-11-20T08:59:38.538Z] ====================================== 00:05:59.619 [2024-11-20T08:59:38.538Z] busy:2609604748 (cyc) 00:05:59.619 [2024-11-20T08:59:38.538Z] total_run_count: 304000 00:05:59.619 [2024-11-20T08:59:38.538Z] tsc_hz: 2600000000 (cyc) 00:05:59.620 [2024-11-20T08:59:38.539Z] ====================================== 00:05:59.620 [2024-11-20T08:59:38.539Z] poller_cost: 8584 (cyc), 3301 (nsec) 00:05:59.620 00:05:59.620 real 0m1.543s 00:05:59.620 user 0m1.341s 00:05:59.620 sys 0m0.088s 00:05:59.620 08:59:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.620 ************************************ 00:05:59.620 08:59:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:59.620 END TEST thread_poller_perf 00:05:59.620 ************************************ 00:05:59.620 08:59:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.620 08:59:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:59.620 08:59:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.620 08:59:38 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.620 ************************************ 00:05:59.620 START TEST thread_poller_perf 00:05:59.620 ************************************ 00:05:59.620 08:59:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:59.620 [2024-11-20 08:59:38.236269] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:05:59.620 [2024-11-20 08:59:38.236415] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59601 ] 00:05:59.620 [2024-11-20 08:59:38.405020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.881 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:59.881 [2024-11-20 08:59:38.546853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.823 [2024-11-20T08:59:39.742Z] ====================================== 00:06:00.823 [2024-11-20T08:59:39.742Z] busy:2603684338 (cyc) 00:06:00.823 [2024-11-20T08:59:39.742Z] total_run_count: 3908000 00:06:00.823 [2024-11-20T08:59:39.742Z] tsc_hz: 2600000000 (cyc) 00:06:00.823 [2024-11-20T08:59:39.742Z] ====================================== 00:06:00.823 [2024-11-20T08:59:39.742Z] poller_cost: 666 (cyc), 256 (nsec) 00:06:00.823 ************************************ 00:06:00.823 END TEST thread_poller_perf 00:06:00.823 ************************************ 00:06:00.823 00:06:00.823 real 0m1.522s 00:06:00.823 user 0m1.325s 00:06:00.823 sys 0m0.087s 00:06:00.823 08:59:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.823 08:59:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.085 08:59:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:01.085 ************************************ 00:06:01.085 END TEST thread 00:06:01.085 ************************************ 00:06:01.085 00:06:01.085 real 0m3.379s 00:06:01.085 user 0m2.799s 00:06:01.085 sys 0m0.305s 00:06:01.085 08:59:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.085 08:59:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.085 08:59:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:01.085 08:59:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.085 08:59:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.085 08:59:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.085 08:59:39 -- common/autotest_common.sh@10 -- # set +x 00:06:01.085 ************************************ 00:06:01.085 START TEST app_cmdline 00:06:01.085 ************************************ 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:01.085 * Looking for test storage... 
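Both result blocks follow directly from the raw counters: poller_cost in cycles is busy divided by total_run_count, and the nanosecond figure is that divided by tsc_hz expressed in GHz. Worked from the two runs above (bash integer division truncates the same way the tool's output does):

    echo $(( 2609604748 / 304000 ))    # period 1 us -> 8584 cyc; 8584 / 2.6 ≈ 3301 ns
    echo $(( 2603684338 / 3908000 ))   # period 0 us -> 666 cyc;  666 / 2.6 ≈ 256 ns

The order-of-magnitude gap between the two runs is the point of the pairing: timed pollers pay per-run timer bookkeeping that active pollers avoid.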
00:06:01.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:01.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.085 08:59:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.085 --rc genhtml_branch_coverage=1 00:06:01.085 --rc genhtml_function_coverage=1 00:06:01.085 --rc genhtml_legend=1 00:06:01.085 --rc geninfo_all_blocks=1 00:06:01.085 --rc geninfo_unexecuted_blocks=1 00:06:01.085 00:06:01.085 ' 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.085 --rc genhtml_branch_coverage=1 00:06:01.085 --rc genhtml_function_coverage=1 00:06:01.085 --rc genhtml_legend=1 00:06:01.085 --rc geninfo_all_blocks=1 00:06:01.085 --rc geninfo_unexecuted_blocks=1 00:06:01.085 00:06:01.085 ' 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.085 --rc genhtml_branch_coverage=1 00:06:01.085 --rc genhtml_function_coverage=1 00:06:01.085 --rc genhtml_legend=1 00:06:01.085 --rc geninfo_all_blocks=1 00:06:01.085 --rc geninfo_unexecuted_blocks=1 00:06:01.085 00:06:01.085 ' 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.085 --rc genhtml_branch_coverage=1 00:06:01.085 --rc genhtml_function_coverage=1 00:06:01.085 --rc genhtml_legend=1 00:06:01.085 --rc geninfo_all_blocks=1 00:06:01.085 --rc geninfo_unexecuted_blocks=1 00:06:01.085 00:06:01.085 ' 00:06:01.085 08:59:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:01.085 08:59:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59684 00:06:01.085 08:59:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59684 00:06:01.085 08:59:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59684 ']' 00:06:01.086 08:59:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.086 08:59:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.086 08:59:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.086 08:59:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.086 08:59:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:01.086 08:59:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:01.347 [2024-11-20 08:59:40.083004] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:06:01.347 [2024-11-20 08:59:40.083798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59684 ] 00:06:01.347 [2024-11-20 08:59:40.249116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.610 [2024-11-20 08:59:40.390354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:02.554 { 00:06:02.554 "version": "SPDK v25.01-pre git sha1 4f0cbdcd1", 00:06:02.554 "fields": { 00:06:02.554 "major": 25, 00:06:02.554 "minor": 1, 00:06:02.554 "patch": 0, 00:06:02.554 "suffix": "-pre", 00:06:02.554 "commit": "4f0cbdcd1" 00:06:02.554 } 00:06:02.554 } 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:02.554 08:59:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:02.554 08:59:41 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:02.815 request: 00:06:02.815 { 00:06:02.815 "method": "env_dpdk_get_mem_stats", 00:06:02.815 "req_id": 1 00:06:02.815 } 00:06:02.815 Got JSON-RPC error response 00:06:02.815 response: 00:06:02.815 { 00:06:02.815 "code": -32601, 00:06:02.816 "message": "Method not found" 00:06:02.816 } 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.816 08:59:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59684 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59684 ']' 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59684 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59684 00:06:02.816 killing process with pid 59684 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59684' 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 59684 00:06:02.816 08:59:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 59684 00:06:04.731 ************************************ 00:06:04.731 END TEST app_cmdline 00:06:04.731 ************************************ 00:06:04.731 00:06:04.731 real 0m3.495s 00:06:04.731 user 0m3.753s 00:06:04.731 sys 0m0.554s 00:06:04.731 08:59:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.731 08:59:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:04.731 08:59:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.731 08:59:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.731 08:59:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.731 08:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.731 ************************************ 00:06:04.731 START TEST version 00:06:04.731 ************************************ 00:06:04.731 08:59:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:04.731 * Looking for test storage... 
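The -32601 above is the assertion, not an accident: cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are served and anything else is rejected before dispatch. The behavior under test, as a standalone sketch (paths relative to an SPDK checkout):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods         # allowed: returns just the two whitelisted names
    scripts/rpc.py spdk_get_version        # allowed: the version JSON dumped above
    scripts/rpc.py env_dpdk_get_mem_stats  # filtered: JSON-RPC error -32601 "Method not found"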
00:06:04.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:04.731 08:59:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.731 08:59:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.731 08:59:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.731 08:59:43 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.731 08:59:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.731 08:59:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.731 08:59:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.731 08:59:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.731 08:59:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.732 08:59:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.732 08:59:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.732 08:59:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.732 08:59:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.732 08:59:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.732 08:59:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.732 08:59:43 version -- scripts/common.sh@344 -- # case "$op" in 00:06:04.732 08:59:43 version -- scripts/common.sh@345 -- # : 1 00:06:04.732 08:59:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.732 08:59:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.732 08:59:43 version -- scripts/common.sh@365 -- # decimal 1 00:06:04.732 08:59:43 version -- scripts/common.sh@353 -- # local d=1 00:06:04.732 08:59:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.732 08:59:43 version -- scripts/common.sh@355 -- # echo 1 00:06:04.732 08:59:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.732 08:59:43 version -- scripts/common.sh@366 -- # decimal 2 00:06:04.732 08:59:43 version -- scripts/common.sh@353 -- # local d=2 00:06:04.732 08:59:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.732 08:59:43 version -- scripts/common.sh@355 -- # echo 2 00:06:04.732 08:59:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.732 08:59:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.732 08:59:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.732 08:59:43 version -- scripts/common.sh@368 -- # return 0 00:06:04.732 08:59:43 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.732 08:59:43 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.732 --rc genhtml_branch_coverage=1 00:06:04.732 --rc genhtml_function_coverage=1 00:06:04.732 --rc genhtml_legend=1 00:06:04.732 --rc geninfo_all_blocks=1 00:06:04.732 --rc geninfo_unexecuted_blocks=1 00:06:04.732 00:06:04.732 ' 00:06:04.732 08:59:43 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.732 --rc genhtml_branch_coverage=1 00:06:04.732 --rc genhtml_function_coverage=1 00:06:04.732 --rc genhtml_legend=1 00:06:04.732 --rc geninfo_all_blocks=1 00:06:04.732 --rc geninfo_unexecuted_blocks=1 00:06:04.732 00:06:04.732 ' 00:06:04.732 08:59:43 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.732 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:04.732 --rc genhtml_branch_coverage=1 00:06:04.732 --rc genhtml_function_coverage=1 00:06:04.732 --rc genhtml_legend=1 00:06:04.732 --rc geninfo_all_blocks=1 00:06:04.732 --rc geninfo_unexecuted_blocks=1 00:06:04.732 00:06:04.732 ' 00:06:04.732 08:59:43 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.732 --rc genhtml_branch_coverage=1 00:06:04.732 --rc genhtml_function_coverage=1 00:06:04.732 --rc genhtml_legend=1 00:06:04.732 --rc geninfo_all_blocks=1 00:06:04.732 --rc geninfo_unexecuted_blocks=1 00:06:04.732 00:06:04.732 ' 00:06:04.732 08:59:43 version -- app/version.sh@17 -- # get_header_version major 00:06:04.732 08:59:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # cut -f2 00:06:04.732 08:59:43 version -- app/version.sh@17 -- # major=25 00:06:04.732 08:59:43 version -- app/version.sh@18 -- # get_header_version minor 00:06:04.732 08:59:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # cut -f2 00:06:04.732 08:59:43 version -- app/version.sh@18 -- # minor=1 00:06:04.732 08:59:43 version -- app/version.sh@19 -- # get_header_version patch 00:06:04.732 08:59:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # cut -f2 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.732 08:59:43 version -- app/version.sh@19 -- # patch=0 00:06:04.732 08:59:43 version -- app/version.sh@20 -- # get_header_version suffix 00:06:04.732 08:59:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:04.732 08:59:43 version -- app/version.sh@14 -- # cut -f2 00:06:04.732 08:59:43 version -- app/version.sh@20 -- # suffix=-pre 00:06:04.732 08:59:43 version -- app/version.sh@22 -- # version=25.1 00:06:04.732 08:59:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:04.732 08:59:43 version -- app/version.sh@28 -- # version=25.1rc0 00:06:04.732 08:59:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:04.732 08:59:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:04.732 08:59:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:04.732 08:59:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:04.732 ************************************ 00:06:04.732 END TEST version 00:06:04.732 ************************************ 00:06:04.732 00:06:04.732 real 0m0.227s 00:06:04.732 user 0m0.140s 00:06:04.732 sys 0m0.117s 00:06:04.732 08:59:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.732 08:59:43 version -- common/autotest_common.sh@10 -- # set +x 00:06:04.993 08:59:43 -- 
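get_header_version, traced above once per field, is a single grep/cut/tr pipeline over include/spdk/version.h; version.sh assembles the results into 25.1 and, because patch is 0, into the release-candidate form 25.1rc0, which it then cross-checks against python3 -c 'import spdk; print(spdk.__version__)'. One field of the pipeline as a standalone sketch:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 25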
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:04.993 08:59:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:04.993 08:59:43 -- spdk/autotest.sh@194 -- # uname -s 00:06:04.993 08:59:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:04.993 08:59:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.993 08:59:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:04.993 08:59:43 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:04.993 08:59:43 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:04.993 08:59:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:04.993 08:59:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.993 08:59:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.993 ************************************ 00:06:04.993 START TEST blockdev_nvme 00:06:04.993 ************************************ 00:06:04.993 08:59:43 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:04.993 * Looking for test storage... 00:06:04.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:04.993 08:59:43 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:04.993 08:59:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:04.993 08:59:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:04.993 08:59:43 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:04.993 08:59:43 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:04.993 08:59:43 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:04.993 08:59:43 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:04.994 08:59:43 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:04.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.994 --rc genhtml_branch_coverage=1 00:06:04.994 --rc genhtml_function_coverage=1 00:06:04.994 --rc genhtml_legend=1 00:06:04.994 --rc geninfo_all_blocks=1 00:06:04.994 --rc geninfo_unexecuted_blocks=1 00:06:04.994 00:06:04.994 ' 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:04.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.994 --rc genhtml_branch_coverage=1 00:06:04.994 --rc genhtml_function_coverage=1 00:06:04.994 --rc genhtml_legend=1 00:06:04.994 --rc geninfo_all_blocks=1 00:06:04.994 --rc geninfo_unexecuted_blocks=1 00:06:04.994 00:06:04.994 ' 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:04.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.994 --rc genhtml_branch_coverage=1 00:06:04.994 --rc genhtml_function_coverage=1 00:06:04.994 --rc genhtml_legend=1 00:06:04.994 --rc geninfo_all_blocks=1 00:06:04.994 --rc geninfo_unexecuted_blocks=1 00:06:04.994 00:06:04.994 ' 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:04.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.994 --rc genhtml_branch_coverage=1 00:06:04.994 --rc genhtml_function_coverage=1 00:06:04.994 --rc genhtml_legend=1 00:06:04.994 --rc geninfo_all_blocks=1 00:06:04.994 --rc geninfo_unexecuted_blocks=1 00:06:04.994 00:06:04.994 ' 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:04.994 08:59:43 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59862 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59862 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59862 ']' 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.994 08:59:43 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:04.994 08:59:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.256 [2024-11-20 08:59:43.977710] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:06:05.256 [2024-11-20 08:59:43.978169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59862 ] 00:06:05.256 [2024-11-20 08:59:44.154388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.517 [2024-11-20 08:59:44.298515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.462 08:59:45 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.462 08:59:45 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:06.462 08:59:45 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:06.462 08:59:45 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:06.462 08:59:45 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:06.462 08:59:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:06.462 08:59:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.462 08:59:45 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:06.462 08:59:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.462 08:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.725 08:59:45 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:06.725 08:59:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:06.725 08:59:45 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:06.726 08:59:45 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "b4f2b91e-0028-49e5-b65d-2ead20a8f04c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "b4f2b91e-0028-49e5-b65d-2ead20a8f04c",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "3023d9e2-0edd-443a-b975-3260e339b207"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "3023d9e2-0edd-443a-b975-3260e339b207",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2af9ce90-cf22-4dd7-b9b7-ff1c1d6d739b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2af9ce90-cf22-4dd7-b9b7-ff1c1d6d739b",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "29b2656e-8205-4260-a426-44bce76da876"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "29b2656e-8205-4260-a426-44bce76da876",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "11456339-0a90-4cad-9e08-1f206f733589"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "11456339-0a90-4cad-9e08-1f206f733589",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "37c50a53-9d4c-410c-81df-8d36b994925f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "37c50a53-9d4c-410c-81df-8d36b994925f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:06.726 08:59:45 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:06.726 08:59:45 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:06.726 08:59:45 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:06.726 08:59:45 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59862 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59862 ']' 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59862 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:06.726 08:59:45 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59862 00:06:06.726 killing process with pid 59862 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59862' 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59862 00:06:06.726 08:59:45 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59862 00:06:08.646 08:59:47 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:08.646 08:59:47 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:08.646 08:59:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:08.646 08:59:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.646 08:59:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:08.646 ************************************ 00:06:08.646 START TEST bdev_hello_world 00:06:08.646 ************************************ 00:06:08.646 08:59:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:08.646 [2024-11-20 08:59:47.273437] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:08.646 [2024-11-20 08:59:47.273595] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59951 ] 00:06:08.646 [2024-11-20 08:59:47.440372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.908 [2024-11-20 08:59:47.580171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.479 [2024-11-20 08:59:48.187018] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:09.479 [2024-11-20 08:59:48.187080] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:09.479 [2024-11-20 08:59:48.187107] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:09.479 [2024-11-20 08:59:48.189975] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:09.479 [2024-11-20 08:59:48.191308] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:09.479 [2024-11-20 08:59:48.191366] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:09.479 [2024-11-20 08:59:48.191510] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
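For reference, the bdev_hello_world run above reduces to one invocation of the hello_bdev example against the generated bdev config; a minimal sketch, assuming the repo layout and config file used throughout this run:

  # launch the hello_bdev example app on the bdev named Nvme0n1; it opens the
  # bdev, writes "Hello World!" through an io channel, and reads it back
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1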
00:06:09.479 00:06:09.479 [2024-11-20 08:59:48.191540] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:10.421 ************************************ 00:06:10.421 END TEST bdev_hello_world 00:06:10.421 ************************************ 00:06:10.421 00:06:10.421 real 0m1.824s 00:06:10.421 user 0m1.477s 00:06:10.421 sys 0m0.231s 00:06:10.421 08:59:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.421 08:59:49 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:10.421 08:59:49 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:10.421 08:59:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.421 08:59:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.421 08:59:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:10.421 ************************************ 00:06:10.421 START TEST bdev_bounds 00:06:10.421 ************************************ 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59988 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.421 Process bdevio pid: 59988 00:06:10.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59988' 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59988 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59988 ']' 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.421 08:59:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:10.421 [2024-11-20 08:59:49.170384] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:06:10.421 [2024-11-20 08:59:49.170774] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59988 ] 00:06:10.421 [2024-11-20 08:59:49.336922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.699 [2024-11-20 08:59:49.472640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.699 [2024-11-20 08:59:49.473037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.699 [2024-11-20 08:59:49.473044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.271 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.271 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:11.271 08:59:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:11.531 I/O targets: 00:06:11.531 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:11.531 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:11.531 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.531 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.532 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:11.532 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:11.532 00:06:11.532 00:06:11.532 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.532 http://cunit.sourceforge.net/ 00:06:11.532 00:06:11.532 00:06:11.532 Suite: bdevio tests on: Nvme3n1 00:06:11.532 Test: blockdev write read block ...passed 00:06:11.532 Test: blockdev write zeroes read block ...passed 00:06:11.532 Test: blockdev write zeroes read no split ...passed 00:06:11.532 Test: blockdev write zeroes read split ...passed 00:06:11.532 Test: blockdev write zeroes read split partial ...passed 00:06:11.532 Test: blockdev reset ...[2024-11-20 08:59:50.260209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:11.532 [2024-11-20 08:59:50.265461] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:06:11.532 Test: blockdev write read 8 blocks ...
00:06:11.532 passed 00:06:11.532 Test: blockdev write read size > 128k ...passed 00:06:11.532 Test: blockdev write read invalid size ...passed 00:06:11.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.532 Test: blockdev write read max offset ...passed 00:06:11.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.532 Test: blockdev writev readv 8 blocks ...passed 00:06:11.532 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.532 Test: blockdev writev readv block ...passed 00:06:11.532 Test: blockdev writev readv size > 128k ...passed 00:06:11.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.532 Test: blockdev comparev and writev ...[2024-11-20 08:59:50.286483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b620a000 len:0x1000 00:06:11.532 [2024-11-20 08:59:50.286552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.532 passed 00:06:11.532 Test: blockdev nvme passthru rw ...passed 00:06:11.532 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.532 Test: blockdev nvme admin passthru ...[2024-11-20 08:59:50.289074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.532 [2024-11-20 08:59:50.289125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.532 passed 00:06:11.532 Test: blockdev copy ...passed 00:06:11.532 Suite: bdevio tests on: Nvme2n3 00:06:11.532 Test: blockdev write read block ...passed 00:06:11.532 Test: blockdev write zeroes read block ...passed 00:06:11.532 Test: blockdev write zeroes read no split ...passed 00:06:11.532 Test: blockdev write zeroes read split ...passed 00:06:11.532 Test: blockdev write zeroes read split partial ...passed 00:06:11.532 Test: blockdev reset ...[2024-11-20 08:59:50.365783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.532 [2024-11-20 08:59:50.372463] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:06:11.532 Test: blockdev write read 8 blocks ...
00:06:11.532 passed 00:06:11.532 Test: blockdev write read size > 128k ...passed 00:06:11.532 Test: blockdev write read invalid size ...passed 00:06:11.532 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.532 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.532 Test: blockdev write read max offset ...passed 00:06:11.532 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.532 Test: blockdev writev readv 8 blocks ...passed 00:06:11.532 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.532 Test: blockdev writev readv block ...passed 00:06:11.532 Test: blockdev writev readv size > 128k ...passed 00:06:11.532 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.532 Test: blockdev comparev and writev ...[2024-11-20 08:59:50.390896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ba606000 len:0x1000 00:06:11.532 [2024-11-20 08:59:50.390967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.532 passed 00:06:11.532 Test: blockdev nvme passthru rw ...passed 00:06:11.532 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:59:50.392815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.532 [2024-11-20 08:59:50.392857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.532 passed 00:06:11.532 Test: blockdev nvme admin passthru ...passed 00:06:11.532 Test: blockdev copy ...passed 00:06:11.532 Suite: bdevio tests on: Nvme2n2 00:06:11.532 Test: blockdev write read block ...passed 00:06:11.532 Test: blockdev write zeroes read block ...passed 00:06:11.532 Test: blockdev write zeroes read no split ...passed 00:06:11.532 Test: blockdev write zeroes read split ...passed 00:06:11.794 Test: blockdev write zeroes read split partial ...passed 00:06:11.794 Test: blockdev reset ...[2024-11-20 08:59:50.478480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.794 [2024-11-20 08:59:50.482368] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:11.794 passed 00:06:11.794 Test: blockdev write read 8 blocks ...passed 00:06:11.794 Test: blockdev write read size > 128k ...passed 00:06:11.794 Test: blockdev write read invalid size ...passed 00:06:11.794 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.794 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.794 Test: blockdev write read max offset ...passed 00:06:11.794 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.794 Test: blockdev writev readv 8 blocks ...passed 00:06:11.794 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.794 Test: blockdev writev readv block ...passed 00:06:11.794 Test: blockdev writev readv size > 128k ...passed 00:06:11.794 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.794 Test: blockdev comparev and writev ...[2024-11-20 08:59:50.501078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2c3c000 len:0x1000 00:06:11.794 [2024-11-20 08:59:50.501148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.794 passed 00:06:11.794 Test: blockdev nvme passthru rw ...passed 00:06:11.794 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.794 Test: blockdev nvme admin passthru ...[2024-11-20 08:59:50.502778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.794 [2024-11-20 08:59:50.502823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.794 passed 00:06:11.794 Test: blockdev copy ...passed 00:06:11.794 Suite: bdevio tests on: Nvme2n1 00:06:11.794 Test: blockdev write read block ...passed 00:06:11.794 Test: blockdev write zeroes read block ...passed 00:06:11.794 Test: blockdev write zeroes read no split ...passed 00:06:11.794 Test: blockdev write zeroes read split ...passed 00:06:11.794 Test: blockdev write zeroes read split partial ...passed 00:06:11.794 Test: blockdev reset ...[2024-11-20 08:59:50.572159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:11.794 [2024-11-20 08:59:50.578715] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed
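Each suite above runs against one entry of the bdev list assembled earlier in the trace, where blockdev.sh@747-748 pair bdev_get_bdevs with jq filters; collapsed into a single pipeline, that lookup is roughly:

  # list the names of all unclaimed bdevs, mirroring the mapfile/jq steps
  # traced at bdev/blockdev.sh@747-748
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'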
00:06:11.794 00:06:11.794 Test: blockdev write read 8 blocks ...passed 00:06:11.794 Test: blockdev write read size > 128k ...passed 00:06:11.794 Test: blockdev write read invalid size ...passed 00:06:11.794 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.794 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.794 Test: blockdev write read max offset ...passed 00:06:11.794 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.794 Test: blockdev writev readv 8 blocks ...passed 00:06:11.794 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.794 Test: blockdev writev readv block ...passed 00:06:11.794 Test: blockdev writev readv size > 128k ...passed 00:06:11.794 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.794 Test: blockdev comparev and writev ...[2024-11-20 08:59:50.600237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2c38000 len:0x1000 00:06:11.794 [2024-11-20 08:59:50.600455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.794 passed 00:06:11.794 Test: blockdev nvme passthru rw ...passed 00:06:11.794 Test: blockdev nvme passthru vendor specific ...passed 00:06:11.794 Test: blockdev nvme admin passthru ...[2024-11-20 08:59:50.602600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:11.795 [2024-11-20 08:59:50.602655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:11.795 passed 00:06:11.795 Test: blockdev copy ...passed 00:06:11.795 Suite: bdevio tests on: Nvme1n1 00:06:11.795 Test: blockdev write read block ...passed 00:06:11.795 Test: blockdev write zeroes read block ...passed 00:06:11.795 Test: blockdev write zeroes read no split ...passed 00:06:11.795 Test: blockdev write zeroes read split ...passed 00:06:11.795 Test: blockdev write zeroes read split partial ...passed 00:06:11.795 Test: blockdev reset ...[2024-11-20 08:59:50.676052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:11.795 [2024-11-20 08:59:50.680948] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:06:11.795 Test: blockdev write read 8 blocks ...
00:06:11.795 passed 00:06:11.795 Test: blockdev write read size > 128k ...passed 00:06:11.795 Test: blockdev write read invalid size ...passed 00:06:11.795 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:11.795 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:11.795 Test: blockdev write read max offset ...passed 00:06:11.795 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:11.795 Test: blockdev writev readv 8 blocks ...passed 00:06:11.795 Test: blockdev writev readv 30 x 1block ...passed 00:06:11.795 Test: blockdev writev readv block ...passed 00:06:11.795 Test: blockdev writev readv size > 128k ...passed 00:06:11.795 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:11.795 Test: blockdev comparev and writev ...[2024-11-20 08:59:50.703759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d2c34000 len:0x1000 00:06:11.795 [2024-11-20 08:59:50.703856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:11.795 passed 00:06:11.795 Test: blockdev nvme passthru rw ...passed 00:06:11.795 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:59:50.706109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 passed 00:06:11.795 Test: blockdev nvme admin passthru ... 00:06:11.795 [2024-11-20 08:59:50.706294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:12.055 passed 00:06:12.055 Test: blockdev copy ...passed 00:06:12.055 Suite: bdevio tests on: Nvme0n1 00:06:12.055 Test: blockdev write read block ...passed 00:06:12.055 Test: blockdev write zeroes read block ...passed 00:06:12.055 Test: blockdev write zeroes read no split ...passed 00:06:12.055 Test: blockdev write zeroes read split ...passed 00:06:12.055 Test: blockdev write zeroes read split partial ...passed 00:06:12.056 Test: blockdev reset ...[2024-11-20 08:59:50.781603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:12.056 passed 00:06:12.056 Test: blockdev write read 8 blocks ...[2024-11-20 08:59:50.785716] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:12.056 passed 00:06:12.056 Test: blockdev write read size > 128k ...passed 00:06:12.056 Test: blockdev write read invalid size ...passed 00:06:12.056 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:12.056 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:12.056 Test: blockdev write read max offset ...passed 00:06:12.056 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:12.056 Test: blockdev writev readv 8 blocks ...passed 00:06:12.056 Test: blockdev writev readv 30 x 1block ...passed 00:06:12.056 Test: blockdev writev readv block ...passed 00:06:12.056 Test: blockdev writev readv size > 128k ...passed 00:06:12.056 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:12.056 Test: blockdev comparev and writev ...passed 00:06:12.056 Test: blockdev nvme passthru rw ...[2024-11-20 08:59:50.803697] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:12.056 separate metadata which is not supported yet. 
00:06:12.056 passed 00:06:12.056 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:59:50.805303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:12.056 [2024-11-20 08:59:50.805402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:12.056 passed 00:06:12.056 Test: blockdev nvme admin passthru ...passed 00:06:12.056 Test: blockdev copy ...passed 00:06:12.056 00:06:12.056 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.056 suites 6 6 n/a 0 0 00:06:12.056 tests 138 138 138 0 0 00:06:12.056 asserts 893 893 893 0 n/a 00:06:12.056 00:06:12.056 Elapsed time = 1.561 seconds 00:06:12.056 0 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59988 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59988 ']' 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59988 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59988 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.056 killing process with pid 59988 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59988' 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59988 00:06:12.056 08:59:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59988 00:06:12.999 ************************************ 00:06:12.999 END TEST bdev_bounds 00:06:12.999 ************************************ 00:06:12.999 08:59:51 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:12.999 00:06:12.999 real 0m2.516s 00:06:12.999 user 0m6.216s 00:06:12.999 sys 0m0.400s 00:06:12.999 08:59:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.999 08:59:51 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:12.999 08:59:51 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:12.999 08:59:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:12.999 08:59:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.999 08:59:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.999 ************************************ 00:06:12.999 START TEST bdev_nbd 00:06:12.999 ************************************ 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60053 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60053 /var/tmp/spdk-nbd.sock 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60053 ']' 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:12.999 08:59:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:12.999 [2024-11-20 08:59:51.764018] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
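The nbd_function_test beginning here reduces, per bdev, to the three operations visible in the trace that follows: export the bdev as an NBD block device over the dedicated /var/tmp/spdk-nbd.sock RPC socket, verify a direct 4096-byte read through it with dd, and detach it again. A sketch using the same socket and paths as this run:

  # export a bdev as /dev/nbd0, read one 4 KiB block through it, then detach
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0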
00:06:12.999 [2024-11-20 08:59:51.764186] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.259 [2024-11-20 08:59:51.931589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.259 [2024-11-20 08:59:52.074604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:13.831 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.092 08:59:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.092 1+0 records in 
00:06:14.092 1+0 records out 00:06:14.092 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145492 s, 2.8 MB/s 00:06:14.092 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.353 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.615 1+0 records in 00:06:14.615 1+0 records out 00:06:14.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000802671 s, 5.1 MB/s 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.615 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:14.878 1+0 records in 00:06:14.878 1+0 records out 00:06:14.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140965 s, 2.9 MB/s 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:14.878 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.140 1+0 records in 00:06:15.140 1+0 records out 00:06:15.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00122356 s, 3.3 MB/s 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.140 08:59:53 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.140 08:59:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:15.140 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:15.140 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.403 1+0 records in 00:06:15.403 1+0 records out 00:06:15.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123394 s, 3.3 MB/s 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:15.403 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:15.665 1+0 records in 00:06:15.665 1+0 records out 00:06:15.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00150527 s, 2.7 MB/s 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd0", 00:06:15.665 "bdev_name": "Nvme0n1" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd1", 00:06:15.665 "bdev_name": "Nvme1n1" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd2", 00:06:15.665 "bdev_name": "Nvme2n1" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd3", 00:06:15.665 "bdev_name": "Nvme2n2" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd4", 00:06:15.665 "bdev_name": "Nvme2n3" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd5", 00:06:15.665 "bdev_name": "Nvme3n1" 00:06:15.665 } 00:06:15.665 ]' 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd0", 00:06:15.665 "bdev_name": "Nvme0n1" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd1", 00:06:15.665 "bdev_name": "Nvme1n1" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd2", 00:06:15.665 "bdev_name": "Nvme2n1" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd3", 00:06:15.665 "bdev_name": "Nvme2n2" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd4", 00:06:15.665 "bdev_name": "Nvme2n3" 00:06:15.665 }, 00:06:15.665 { 00:06:15.665 "nbd_device": "/dev/nbd5", 00:06:15.665 "bdev_name": "Nvme3n1" 00:06:15.665 } 00:06:15.665 ]' 00:06:15.665 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.946 08:59:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.207 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.468 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.730 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.991 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:17.253 08:59:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.253 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.515 08:59:56 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.515 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:17.777 /dev/nbd0 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.777 
08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.777 1+0 records in 00:06:17.777 1+0 records out 00:06:17.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00131196 s, 3.1 MB/s 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:17.777 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:18.037 /dev/nbd1 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.037 1+0 records in 00:06:18.037 1+0 records out 00:06:18.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118028 s, 3.5 MB/s 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.037 08:59:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:18.300 /dev/nbd10 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.300 1+0 records in 00:06:18.300 1+0 records out 00:06:18.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00113944 s, 3.6 MB/s 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.300 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:18.561 /dev/nbd11 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.561 1+0 records in 00:06:18.561 1+0 records out 00:06:18.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115001 s, 3.6 MB/s 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.561 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.562 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.562 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.562 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.562 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.562 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.562 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:18.822 /dev/nbd12 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.822 1+0 records in 00:06:18.822 1+0 records out 00:06:18.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00132892 s, 3.1 MB/s 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.822 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.823 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.823 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.823 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:18.823 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:18.823 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:19.084 /dev/nbd13 00:06:19.084 08:59:57 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:19.084 1+0 records in 00:06:19.084 1+0 records out 00:06:19.084 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014236 s, 2.9 MB/s 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.084 08:59:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.344 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.344 { 00:06:19.344 "nbd_device": "/dev/nbd0", 00:06:19.344 "bdev_name": "Nvme0n1" 00:06:19.344 }, 00:06:19.344 { 00:06:19.344 "nbd_device": "/dev/nbd1", 00:06:19.344 "bdev_name": "Nvme1n1" 00:06:19.344 }, 00:06:19.344 { 00:06:19.344 "nbd_device": "/dev/nbd10", 00:06:19.344 "bdev_name": "Nvme2n1" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd11", 00:06:19.345 "bdev_name": "Nvme2n2" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd12", 00:06:19.345 "bdev_name": "Nvme2n3" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd13", 00:06:19.345 "bdev_name": "Nvme3n1" 00:06:19.345 } 00:06:19.345 ]' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd0", 00:06:19.345 "bdev_name": "Nvme0n1" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd1", 00:06:19.345 "bdev_name": "Nvme1n1" 00:06:19.345 }, 00:06:19.345 { 
00:06:19.345 "nbd_device": "/dev/nbd10", 00:06:19.345 "bdev_name": "Nvme2n1" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd11", 00:06:19.345 "bdev_name": "Nvme2n2" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd12", 00:06:19.345 "bdev_name": "Nvme2n3" 00:06:19.345 }, 00:06:19.345 { 00:06:19.345 "nbd_device": "/dev/nbd13", 00:06:19.345 "bdev_name": "Nvme3n1" 00:06:19.345 } 00:06:19.345 ]' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.345 /dev/nbd1 00:06:19.345 /dev/nbd10 00:06:19.345 /dev/nbd11 00:06:19.345 /dev/nbd12 00:06:19.345 /dev/nbd13' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.345 /dev/nbd1 00:06:19.345 /dev/nbd10 00:06:19.345 /dev/nbd11 00:06:19.345 /dev/nbd12 00:06:19.345 /dev/nbd13' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:19.345 256+0 records in 00:06:19.345 256+0 records out 00:06:19.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00993713 s, 106 MB/s 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.345 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.605 256+0 records in 00:06:19.605 256+0 records out 00:06:19.605 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223304 s, 4.7 MB/s 00:06:19.605 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.605 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.866 256+0 records in 00:06:19.866 256+0 records out 00:06:19.866 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.289946 s, 3.6 MB/s 00:06:19.866 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.866 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:20.128 256+0 records in 00:06:20.128 256+0 records out 00:06:20.128 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.281431 s, 3.7 MB/s 00:06:20.128 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.128 08:59:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:20.393 256+0 records in 00:06:20.393 256+0 records out 00:06:20.393 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.284902 s, 3.7 MB/s 00:06:20.393 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.393 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:20.653 256+0 records in 00:06:20.653 256+0 records out 00:06:20.653 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.282663 s, 3.7 MB/s 00:06:20.653 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.653 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:20.914 256+0 records in 00:06:20.914 256+0 records out 00:06:20.914 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224695 s, 4.7 MB/s 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.914 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.177 08:59:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.438 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.698 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:21.960 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.961 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.961 09:00:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:22.221 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.481 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.740 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:22.740 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:22.740 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.740 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:22.740 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:22.740 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:22.741 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:22.999 malloc_lvol_verify 00:06:22.999 09:00:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:23.259 704446aa-ada7-4c0a-a449-287cde955141 00:06:23.259 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:23.518 074fe4a8-fe02-43b4-8907-13aeea196dbf 00:06:23.518 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:23.779 /dev/nbd0 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:23.779 mke2fs 1.47.0 (5-Feb-2023) 00:06:23.779 Discarding device blocks: 0/4096 done 00:06:23.779 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:23.779 00:06:23.779 Allocating group tables: 0/1 done 00:06:23.779 Writing inode tables: 0/1 done 00:06:23.779 Creating journal (1024 blocks): done 00:06:23.779 Writing superblocks and filesystem accounting information: 0/1 done 00:06:23.779 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:23.779 09:00:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.779 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60053 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60053 ']' 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60053 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60053 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.040 killing process with pid 60053 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60053' 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60053 00:06:24.040 09:00:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60053 00:06:25.061 09:00:03 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:25.061 00:06:25.061 real 0m12.157s 00:06:25.061 user 0m16.472s 00:06:25.061 sys 0m3.963s 00:06:25.061 ************************************ 00:06:25.061 END TEST bdev_nbd 00:06:25.061 ************************************ 00:06:25.061 09:00:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.061 09:00:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:25.061 skipping fio tests on NVMe due to multi-ns failures. 00:06:25.061 09:00:03 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:25.061 09:00:03 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:25.061 09:00:03 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
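The bdev_nbd test above repeats one pattern per device: attach a bdev to an NBD node over the RPC socket, poll /proc/partitions until the kernel exposes (or removes) the node, push data through it with dd, and verify the round trip with cmp. A condensed sketch of that pattern, using the rpc.py path and socket from the trace; wait_for_nbd here is an illustrative stand-in for the real waitfornbd/waitfornbd_exit helpers, and the sleep interval is assumed:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    # Poll /proc/partitions up to 20 times for the node to appear ("present")
    # or disappear ("absent"), as the traced loops above do.
    wait_for_nbd() {
        local name=$1 want=${2:-present} i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$name" /proc/partitions; then
                [[ $want == present ]] && return 0
            else
                [[ $want == absent ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

    "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
    wait_for_nbd nbd0 present

    # Write a known random pattern, then verify it reads back byte for byte.
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0

    # The harness also cross-checks how many nodes are attached via the RPC JSON.
    count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)

    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    wait_for_nbd nbd0 absent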
00:06:25.061 09:00:03 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:25.061 09:00:03 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.061 09:00:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:25.061 09:00:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.061 09:00:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.061 ************************************ 00:06:25.061 START TEST bdev_verify 00:06:25.061 ************************************ 00:06:25.061 09:00:03 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:25.360 [2024-11-20 09:00:03.959991] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:25.360 [2024-11-20 09:00:03.960118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60448 ] 00:06:25.360 [2024-11-20 09:00:04.119536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.360 [2024-11-20 09:00:04.245188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.360 [2024-11-20 09:00:04.245198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.930 Running I/O for 5 seconds... 00:06:28.253 18752.00 IOPS, 73.25 MiB/s [2024-11-20T09:00:08.112Z] 18720.00 IOPS, 73.12 MiB/s [2024-11-20T09:00:09.496Z] 18112.00 IOPS, 70.75 MiB/s [2024-11-20T09:00:10.082Z] 18240.00 IOPS, 71.25 MiB/s [2024-11-20T09:00:10.082Z] 18496.00 IOPS, 72.25 MiB/s 00:06:31.163 Latency(us) 00:06:31.163 [2024-11-20T09:00:10.082Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:31.163 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x0 length 0xbd0bd 00:06:31.163 Nvme0n1 : 5.06 1519.18 5.93 0.00 0.00 83948.35 17845.96 104051.00 00:06:31.163 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:31.163 Nvme0n1 : 5.09 1509.71 5.90 0.00 0.00 83770.25 17341.83 71383.83 00:06:31.163 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x0 length 0xa0000 00:06:31.163 Nvme1n1 : 5.06 1518.74 5.93 0.00 0.00 83735.91 16736.89 90742.15 00:06:31.163 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0xa0000 length 0xa0000 00:06:31.163 Nvme1n1 : 5.09 1509.31 5.90 0.00 0.00 83650.26 17543.48 72997.02 00:06:31.163 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x0 length 0x80000 00:06:31.163 Nvme2n1 : 5.08 1525.77 5.96 0.00 0.00 83186.47 8620.50 83079.48 00:06:31.163 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x80000 length 0x80000 00:06:31.163 Nvme2n1 : 5.09 1508.90 5.89 0.00 0.00 83388.85 15829.46 77836.60 00:06:31.163 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x0 length 0x80000 00:06:31.163 Nvme2n2 : 5.08 1524.70 5.96 0.00 0.00 83041.07 11040.30 76626.71 00:06:31.163 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x80000 length 0x80000 00:06:31.163 Nvme2n2 : 5.07 1513.32 5.91 0.00 0.00 84354.75 16736.89 100421.32 00:06:31.163 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x0 length 0x80000 00:06:31.163 Nvme2n3 : 5.10 1531.84 5.98 0.00 0.00 82602.52 12401.43 70980.53 00:06:31.163 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x80000 length 0x80000 00:06:31.163 Nvme2n3 : 5.08 1512.17 5.91 0.00 0.00 84269.78 17946.78 91952.05 00:06:31.163 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x0 length 0x20000 00:06:31.163 Nvme3n1 : 5.10 1531.44 5.98 0.00 0.00 82180.94 12754.31 73803.62 00:06:31.163 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:31.163 Verification LBA range: start 0x20000 length 0x20000 00:06:31.163 Nvme3n1 : 5.08 1510.71 5.90 0.00 0.00 83932.27 16131.94 71787.13 00:06:31.163 [2024-11-20T09:00:10.082Z] =================================================================================================================== 00:06:31.163 [2024-11-20T09:00:10.082Z] Total : 18215.79 71.16 0.00 0.00 83501.50 8620.50 104051.00 00:06:32.550 00:06:32.550 real 0m7.388s 00:06:32.550 user 0m13.733s 00:06:32.550 sys 0m0.249s 00:06:32.550 09:00:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.550 ************************************ 00:06:32.550 END TEST bdev_verify 00:06:32.550 ************************************ 00:06:32.550 09:00:11 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:32.550 09:00:11 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:32.550 09:00:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:32.550 09:00:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.550 09:00:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:32.550 ************************************ 00:06:32.550 START TEST bdev_verify_big_io 00:06:32.550 ************************************ 00:06:32.550 09:00:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:32.550 [2024-11-20 09:00:11.437033] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
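Both verify stages, bdev_verify above and bdev_verify_big_io just starting, drive the same bdevperf binary; only the I/O size changes between them. The invocation pattern, with the config path and flags taken directly from the traces (-q is queue depth per job, -o the I/O size in bytes, -w the workload, -t the runtime in seconds, as the per-job headers in the latency tables confirm; -C and -m 0x3 are carried over verbatim):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    "$bdevperf" --json "$conf" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify, 4 KiB I/Os
    "$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io, 64 KiB I/Os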
00:06:32.550 [2024-11-20 09:00:11.437200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60548 ] 00:06:32.811 [2024-11-20 09:00:11.602546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.073 [2024-11-20 09:00:11.750001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.073 [2024-11-20 09:00:11.750046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.642 Running I/O for 5 seconds... 00:06:37.587 0.00 IOPS, 0.00 MiB/s [2024-11-20T09:00:18.417Z] 1523.00 IOPS, 95.19 MiB/s [2024-11-20T09:00:18.417Z] 2045.67 IOPS, 127.85 MiB/s 00:06:39.498 Latency(us) 00:06:39.498 [2024-11-20T09:00:18.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:39.498 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x0 length 0xbd0b 00:06:39.498 Nvme0n1 : 5.69 117.00 7.31 0.00 0.00 1039544.85 23290.49 1019538.51 00:06:39.498 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:39.498 Nvme0n1 : 5.75 122.48 7.65 0.00 0.00 1003028.34 31658.93 1019538.51 00:06:39.498 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x0 length 0xa000 00:06:39.498 Nvme1n1 : 5.76 122.29 7.64 0.00 0.00 981517.61 60091.47 877577.45 00:06:39.498 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0xa000 length 0xa000 00:06:39.498 Nvme1n1 : 5.75 122.43 7.65 0.00 0.00 978396.05 101631.21 838860.80 00:06:39.498 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x0 length 0x8000 00:06:39.498 Nvme2n1 : 5.81 127.79 7.99 0.00 0.00 916769.53 16938.54 884030.23 00:06:39.498 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x8000 length 0x8000 00:06:39.498 Nvme2n1 : 5.81 128.26 8.02 0.00 0.00 915131.10 14417.92 851766.35 00:06:39.498 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x0 length 0x8000 00:06:39.498 Nvme2n2 : 5.82 127.25 7.95 0.00 0.00 890304.79 17140.18 884030.23 00:06:39.498 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x8000 length 0x8000 00:06:39.498 Nvme2n2 : 5.81 132.17 8.26 0.00 0.00 866715.04 41338.09 864671.90 00:06:39.498 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x0 length 0x8000 00:06:39.498 Nvme2n3 : 5.82 132.01 8.25 0.00 0.00 835375.26 38111.70 903388.55 00:06:39.498 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x8000 length 0x8000 00:06:39.498 Nvme2n3 : 5.81 132.11 8.26 0.00 0.00 839273.94 42951.29 896935.78 00:06:39.498 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:39.498 Verification LBA range: start 0x0 length 0x2000 00:06:39.498 Nvme3n1 : 5.92 152.78 9.55 0.00 0.00 703328.04 1083.86 1109877.37 00:06:39.498 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO 
size: 65536) 00:06:39.498 Verification LBA range: start 0x2000 length 0x2000 00:06:39.498 Nvme3n1 : 5.90 151.74 9.48 0.00 0.00 711977.17 315.08 916294.10 00:06:39.498 [2024-11-20T09:00:18.417Z] =================================================================================================================== 00:06:39.498 [2024-11-20T09:00:18.417Z] Total : 1568.32 98.02 0.00 0.00 880956.51 315.08 1109877.37 00:06:41.409 00:06:41.409 real 0m8.497s 00:06:41.409 user 0m15.872s 00:06:41.409 sys 0m0.314s 00:06:41.409 ************************************ 00:06:41.409 END TEST bdev_verify_big_io 00:06:41.409 ************************************ 00:06:41.409 09:00:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.409 09:00:19 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:41.409 09:00:19 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.409 09:00:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:41.409 09:00:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.409 09:00:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:41.409 ************************************ 00:06:41.409 START TEST bdev_write_zeroes 00:06:41.409 ************************************ 00:06:41.409 09:00:19 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:41.409 [2024-11-20 09:00:20.017815] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:41.409 [2024-11-20 09:00:20.018033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60660 ] 00:06:41.409 [2024-11-20 09:00:20.194359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.409 [2024-11-20 09:00:20.311299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.348 Running I/O for 1 seconds... 
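A quick sanity check on the big-I/O totals above: the MiB/s column is just IOPS times the 64 KiB I/O size.

    # 1568.32 IOPS * 65536 B per I/O / 1048576 B per MiB = 98.02 MiB/s,
    # matching the Total row of the bdev_verify_big_io table
    awk 'BEGIN { printf "%.2f MiB/s\n", 1568.32 * 65536 / 1048576 }'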
00:06:43.287 45312.00 IOPS, 177.00 MiB/s 00:06:43.287 Latency(us) 00:06:43.287 [2024-11-20T09:00:22.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.288 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.288 Nvme0n1 : 1.03 7536.53 29.44 0.00 0.00 16938.12 7057.72 30045.74 00:06:43.288 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.288 Nvme1n1 : 1.03 7527.64 29.40 0.00 0.00 16946.44 12149.37 30045.74 00:06:43.288 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.288 Nvme2n1 : 1.03 7518.91 29.37 0.00 0.00 16910.70 11695.66 30045.74 00:06:43.288 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.288 Nvme2n2 : 1.03 7510.26 29.34 0.00 0.00 16806.87 11645.24 25306.98 00:06:43.288 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.288 Nvme2n3 : 1.03 7501.67 29.30 0.00 0.00 16758.29 10586.58 24903.68 00:06:43.288 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:43.288 Nvme3n1 : 1.03 7493.12 29.27 0.00 0.00 16740.02 9880.81 26617.70 00:06:43.288 [2024-11-20T09:00:22.207Z] =================================================================================================================== 00:06:43.288 [2024-11-20T09:00:22.207Z] Total : 45088.14 176.13 0.00 0.00 16850.07 7057.72 30045.74 00:06:43.857 00:06:43.857 real 0m2.768s 00:06:43.857 user 0m2.409s 00:06:43.857 sys 0m0.238s 00:06:43.857 09:00:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.857 09:00:22 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:43.857 ************************************ 00:06:43.857 END TEST bdev_write_zeroes 00:06:43.857 ************************************ 00:06:43.857 09:00:22 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.857 09:00:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:43.857 09:00:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.857 09:00:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.857 ************************************ 00:06:43.857 START TEST bdev_json_nonenclosed 00:06:43.857 ************************************ 00:06:43.857 09:00:22 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.116 [2024-11-20 09:00:22.825677] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
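The per-job averages in the write_zeroes table above are also consistent with Little's law at queue depth 128: average latency is roughly outstanding I/Os divided by IOPS. For the Nvme0n1 job:

    # 128 outstanding I/Os / 7536.53 IOPS = ~16984 us,
    # close to the reported 16938.12 us average
    awk 'BEGIN { printf "%.0f us\n", 128 / 7536.53 * 1e6 }'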
00:06:44.116 [2024-11-20 09:00:22.825806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60713 ] 00:06:44.116 [2024-11-20 09:00:22.984649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.375 [2024-11-20 09:00:23.089228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.375 [2024-11-20 09:00:23.089318] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:44.375 [2024-11-20 09:00:23.089334] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:44.375 [2024-11-20 09:00:23.089366] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.375 00:06:44.375 real 0m0.510s 00:06:44.375 user 0m0.322s 00:06:44.375 sys 0m0.084s 00:06:44.375 09:00:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.375 09:00:23 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:44.375 ************************************ 00:06:44.375 END TEST bdev_json_nonenclosed 00:06:44.375 ************************************ 00:06:44.634 09:00:23 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.634 09:00:23 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:44.634 09:00:23 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.634 09:00:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:44.634 ************************************ 00:06:44.634 START TEST bdev_json_nonarray 00:06:44.634 ************************************ 00:06:44.634 09:00:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:44.635 [2024-11-20 09:00:23.393983] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:44.635 [2024-11-20 09:00:23.394109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60743 ] 00:06:44.896 [2024-11-20 09:00:23.555650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.896 [2024-11-20 09:00:23.660500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.896 [2024-11-20 09:00:23.660587] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
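The two negative tests feed bdevperf deliberately malformed configs and expect exactly the json_config errors logged above. The fixture contents are never echoed into the log; the following minimal reproductions are guesses that would trip the same checks, not the actual repo files:

    # nonenclosed.json: valid-looking keys, but not wrapped in a top-level {}
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF

    # nonarray.json: enclosed, but "subsystems" is not an array
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF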
00:06:44.896 [2024-11-20 09:00:23.660604] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:44.896 [2024-11-20 09:00:23.660613] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.157 00:06:45.157 real 0m0.513s 00:06:45.157 user 0m0.314s 00:06:45.157 sys 0m0.094s 00:06:45.157 09:00:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.157 ************************************ 00:06:45.157 END TEST bdev_json_nonarray 00:06:45.157 09:00:23 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:45.157 ************************************ 00:06:45.157 09:00:23 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:45.158 09:00:23 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:45.158 00:06:45.158 real 0m40.198s 00:06:45.158 user 1m0.310s 00:06:45.158 sys 0m6.487s 00:06:45.158 09:00:23 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.158 ************************************ 00:06:45.158 END TEST blockdev_nvme 00:06:45.158 ************************************ 00:06:45.158 09:00:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.158 09:00:23 -- spdk/autotest.sh@209 -- # uname -s 00:06:45.158 09:00:23 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:45.158 09:00:23 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:45.158 09:00:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.158 09:00:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.158 09:00:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.158 ************************************ 00:06:45.158 START TEST blockdev_nvme_gpt 00:06:45.158 ************************************ 00:06:45.158 09:00:23 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:45.158 * Looking for test storage... 
00:06:45.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:45.158 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.158 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.158 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.419 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.419 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.420 09:00:24 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.420 --rc genhtml_branch_coverage=1 00:06:45.420 --rc genhtml_function_coverage=1 00:06:45.420 --rc genhtml_legend=1 00:06:45.420 --rc geninfo_all_blocks=1 00:06:45.420 --rc geninfo_unexecuted_blocks=1 00:06:45.420 00:06:45.420 ' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.420 --rc 
genhtml_branch_coverage=1 00:06:45.420 --rc genhtml_function_coverage=1 00:06:45.420 --rc genhtml_legend=1 00:06:45.420 --rc geninfo_all_blocks=1 00:06:45.420 --rc geninfo_unexecuted_blocks=1 00:06:45.420 00:06:45.420 ' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.420 --rc genhtml_branch_coverage=1 00:06:45.420 --rc genhtml_function_coverage=1 00:06:45.420 --rc genhtml_legend=1 00:06:45.420 --rc geninfo_all_blocks=1 00:06:45.420 --rc geninfo_unexecuted_blocks=1 00:06:45.420 00:06:45.420 ' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.420 --rc genhtml_branch_coverage=1 00:06:45.420 --rc genhtml_function_coverage=1 00:06:45.420 --rc genhtml_legend=1 00:06:45.420 --rc geninfo_all_blocks=1 00:06:45.420 --rc geninfo_unexecuted_blocks=1 00:06:45.420 00:06:45.420 ' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60817 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60817 
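waitforlisten blocks until the spdk_tgt just launched (pid 60817 here) answers RPCs on /var/tmp/spdk.sock; the trace that follows is its argument checks and retry bookkeeping. A simplified sketch of the same loop (an assumption-level sketch, not the real helper: it uses rpc.py from the repo's scripts/ directory, while the actual waitforlisten in autotest_common.sh also enforces max_retries and a timeout):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  spdk_tgt_pid=$!
  # poll the RPC socket until the target responds, bailing out if the process died
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
      sleep 0.1
  done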
00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60817 ']' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.420 09:00:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:45.420 09:00:24 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:45.420 [2024-11-20 09:00:24.202556] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:45.420 [2024-11-20 09:00:24.202676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60817 ] 00:06:45.682 [2024-11-20 09:00:24.364436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.682 [2024-11-20 09:00:24.468946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.253 09:00:25 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.253 09:00:25 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:46.253 09:00:25 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:46.253 09:00:25 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:06:46.253 09:00:25 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:46.515 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:46.773 Waiting for block devices as requested 00:06:46.773 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:46.773 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.033 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:47.033 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:52.409 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:52.409 09:00:30 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:52.409 BYT; 00:06:52.409 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:52.409 BYT; 00:06:52.409 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:52.409 09:00:30 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:52.409 09:00:30 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:52.409 09:00:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:52.409 09:00:31 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:52.409 09:00:31 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:52.409 09:00:31 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:52.409 09:00:31 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:53.353 The operation has completed successfully. 00:06:53.353 09:00:32 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:54.293 The operation has completed successfully. 00:06:54.293 09:00:33 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:54.864 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:55.126 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.126 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.386 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.386 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:55.386 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:55.386 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.386 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.386 [] 00:06:55.386 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.386 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:55.386 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:55.386 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:55.386 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:55.386 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:55.386 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.386 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.647 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.647 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:06:55.647 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:55.647 09:00:34 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.647 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.647 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.910 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:55.910 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.910 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.910 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.910 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:55.910 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:55.910 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.910 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:55.910 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:55.910 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.910 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:55.910 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:55.911 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3164837c-7e15-4ce3-8f5e-b31906012734"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3164837c-7e15-4ce3-8f5e-b31906012734",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "59b9469d-d6b2-4edd-87c1-9f57d33ffa22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "59b9469d-d6b2-4edd-87c1-9f57d33ffa22",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4659fb7d-abce-4283-b359-0a23b7b79dfb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4659fb7d-abce-4283-b359-0a23b7b79dfb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "243d1a08-8808-4985-8fa0-2dd33f01b231"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "243d1a08-8808-4985-8fa0-2dd33f01b231",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5caf313e-dc40-4e64-a732-3a264b0f1a54"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5caf313e-dc40-4e64-a732-3a264b0f1a54",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:55.911 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:55.911 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:55.911 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:55.911 09:00:34 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60817 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60817 ']' 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60817 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60817 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.911 killing process with pid 60817 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60817' 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60817 00:06:55.911 09:00:34 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60817 00:06:57.316 09:00:36 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.316 09:00:36 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.316 09:00:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:57.316 09:00:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.316 09:00:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.316 ************************************ 00:06:57.316 START TEST bdev_hello_world 00:06:57.316 ************************************ 00:06:57.316 09:00:36 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.578 
[2024-11-20 09:00:36.287210] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:57.578 [2024-11-20 09:00:36.287342] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61443 ] 00:06:57.578 [2024-11-20 09:00:36.448293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.840 [2024-11-20 09:00:36.551736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.412 [2024-11-20 09:00:37.097655] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.412 [2024-11-20 09:00:37.097712] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.412 [2024-11-20 09:00:37.097741] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.412 [2024-11-20 09:00:37.100291] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.412 [2024-11-20 09:00:37.101092] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.412 [2024-11-20 09:00:37.101120] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.412 [2024-11-20 09:00:37.101649] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:06:58.412 00:06:58.412 [2024-11-20 09:00:37.101680] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:58.985 00:06:58.985 real 0m1.595s 00:06:58.985 user 0m1.308s 00:06:58.985 sys 0m0.179s 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:58.985 ************************************ 00:06:58.985 END TEST bdev_hello_world 00:06:58.985 ************************************ 00:06:58.985 09:00:37 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:58.985 09:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:58.985 09:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.985 09:00:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:58.985 ************************************ 00:06:58.985 START TEST bdev_bounds 00:06:58.985 ************************************ 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61480 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:58.985 Process bdevio pid: 61480 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61480' 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61480 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61480 ']' 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.985 09:00:37 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.985 09:00:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.247 [2024-11-20 09:00:37.950390] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:06:59.247 [2024-11-20 09:00:37.950846] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:06:59.247 [2024-11-20 09:00:38.120550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.508 [2024-11-20 09:00:38.225279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.508 [2024-11-20 09:00:38.225567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.508 [2024-11-20 09:00:38.225589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.080 09:00:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.080 09:00:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:00.080 09:00:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:00.341 I/O targets: 00:07:00.341 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:00.341 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:00.341 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:00.341 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.341 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.341 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:00.341 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:00.341 00:07:00.341 00:07:00.341 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.341 http://cunit.sourceforge.net/ 00:07:00.341 00:07:00.341 00:07:00.341 Suite: bdevio tests on: Nvme3n1 00:07:00.341 Test: blockdev write read block ...passed 00:07:00.341 Test: blockdev write zeroes read block ...passed 00:07:00.341 Test: blockdev write zeroes read no split ...passed 00:07:00.341 Test: blockdev write zeroes read split ...passed 00:07:00.341 Test: blockdev write zeroes read split partial ...passed 00:07:00.341 Test: blockdev reset ...[2024-11-20 09:00:39.075233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:00.341 [2024-11-20 09:00:39.080022] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
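The bdevio harness running here is two cooperating processes: bdevio was started with -w, so it builds the bdevs from bdev.json and then parks itself waiting for an RPC, and tests.py sends the perform_tests request that kicks off the CUnit suites being printed. Rerunning the same thing by hand would look roughly like this (same paths as the trace above; give the first process a moment to come up before issuing the second command):

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests

Each suite resets its controller first (as Nvme3n1 just did), then walks the write/read, writev/readv, comparev, and passthru cases against a single bdev.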
00:07:00.341 passed 00:07:00.341 Test: blockdev write read 8 blocks ...passed 00:07:00.341 Test: blockdev write read size > 128k ...passed 00:07:00.341 Test: blockdev write read invalid size ...passed 00:07:00.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.341 Test: blockdev write read max offset ...passed 00:07:00.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.341 Test: blockdev writev readv 8 blocks ...passed 00:07:00.341 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.341 Test: blockdev writev readv block ...passed 00:07:00.341 Test: blockdev writev readv size > 128k ...passed 00:07:00.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.341 Test: blockdev comparev and writev ...[2024-11-20 09:00:39.102081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29b004000 len:0x1000 00:07:00.341 [2024-11-20 09:00:39.102144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.341 passed 00:07:00.341 Test: blockdev nvme passthru rw ...passed 00:07:00.341 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.341 Test: blockdev nvme admin passthru ...[2024-11-20 09:00:39.106306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.341 [2024-11-20 09:00:39.106359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.341 passed 00:07:00.341 Test: blockdev copy ...passed 00:07:00.341 Suite: bdevio tests on: Nvme2n3 00:07:00.341 Test: blockdev write read block ...passed 00:07:00.341 Test: blockdev write zeroes read block ...passed 00:07:00.341 Test: blockdev write zeroes read no split ...passed 00:07:00.341 Test: blockdev write zeroes read split ...passed 00:07:00.341 Test: blockdev write zeroes read split partial ...passed 00:07:00.341 Test: blockdev reset ...[2024-11-20 09:00:39.171552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.341 [2024-11-20 09:00:39.175083] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.341 passed 00:07:00.341 Test: blockdev write read 8 blocks ...passed 00:07:00.341 Test: blockdev write read size > 128k ...passed 00:07:00.341 Test: blockdev write read invalid size ...passed 00:07:00.341 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.341 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.341 Test: blockdev write read max offset ...passed 00:07:00.341 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.341 Test: blockdev writev readv 8 blocks ...passed 00:07:00.341 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.341 Test: blockdev writev readv block ...passed 00:07:00.341 Test: blockdev writev readv size > 128k ...passed 00:07:00.341 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.341 Test: blockdev comparev and writev ...[2024-11-20 09:00:39.189300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29b002000 len:0x1000 00:07:00.341 [2024-11-20 09:00:39.189361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.341 passed 00:07:00.341 Test: blockdev nvme passthru rw ...passed 00:07:00.341 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.341 Test: blockdev nvme admin passthru ...[2024-11-20 09:00:39.190713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.342 [2024-11-20 09:00:39.190751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.342 passed 00:07:00.342 Test: blockdev copy ...passed 00:07:00.342 Suite: bdevio tests on: Nvme2n2 00:07:00.342 Test: blockdev write read block ...passed 00:07:00.342 Test: blockdev write zeroes read block ...passed 00:07:00.342 Test: blockdev write zeroes read no split ...passed 00:07:00.342 Test: blockdev write zeroes read split ...passed 00:07:00.342 Test: blockdev write zeroes read split partial ...passed 00:07:00.342 Test: blockdev reset ...[2024-11-20 09:00:39.249164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.342 [2024-11-20 09:00:39.254137] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.342 passed 00:07:00.342 Test: blockdev write read 8 blocks ...passed 00:07:00.603 Test: blockdev write read size > 128k ...passed 00:07:00.603 Test: blockdev write read invalid size ...passed 00:07:00.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.603 Test: blockdev write read max offset ...passed 00:07:00.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.603 Test: blockdev writev readv 8 blocks ...passed 00:07:00.603 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.603 Test: blockdev writev readv block ...passed 00:07:00.603 Test: blockdev writev readv size > 128k ...passed 00:07:00.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.603 Test: blockdev comparev and writev ...[2024-11-20 09:00:39.274060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfe38000 len:0x1000 00:07:00.603 [2024-11-20 09:00:39.274116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.603 passed 00:07:00.603 Test: blockdev nvme passthru rw ...passed 00:07:00.603 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.603 Test: blockdev nvme admin passthru ...[2024-11-20 09:00:39.276150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.603 [2024-11-20 09:00:39.276183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.603 passed 00:07:00.603 Test: blockdev copy ...passed 00:07:00.603 Suite: bdevio tests on: Nvme2n1 00:07:00.603 Test: blockdev write read block ...passed 00:07:00.603 Test: blockdev write zeroes read block ...passed 00:07:00.603 Test: blockdev write zeroes read no split ...passed 00:07:00.603 Test: blockdev write zeroes read split ...passed 00:07:00.603 Test: blockdev write zeroes read split partial ...passed 00:07:00.603 Test: blockdev reset ...[2024-11-20 09:00:39.335253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:00.603 [2024-11-20 09:00:39.338600] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:00.603 passed 00:07:00.603 Test: blockdev write read 8 blocks ...passed 00:07:00.603 Test: blockdev write read size > 128k ...passed 00:07:00.603 Test: blockdev write read invalid size ...passed 00:07:00.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.603 Test: blockdev write read max offset ...passed 00:07:00.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.603 Test: blockdev writev readv 8 blocks ...passed 00:07:00.603 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.603 Test: blockdev writev readv block ...passed 00:07:00.603 Test: blockdev writev readv size > 128k ...passed 00:07:00.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.603 Test: blockdev comparev and writev ...[2024-11-20 09:00:39.357416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfe34000 len:0x1000 00:07:00.603 [2024-11-20 09:00:39.357478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.603 passed 00:07:00.603 Test: blockdev nvme passthru rw ...passed 00:07:00.603 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:00:39.359533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:00.603 [2024-11-20 09:00:39.359574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:00.603 passed 00:07:00.603 Test: blockdev nvme admin passthru ...passed 00:07:00.603 Test: blockdev copy ...passed 00:07:00.603 Suite: bdevio tests on: Nvme1n1p2 00:07:00.603 Test: blockdev write read block ...passed 00:07:00.603 Test: blockdev write zeroes read block ...passed 00:07:00.603 Test: blockdev write zeroes read no split ...passed 00:07:00.603 Test: blockdev write zeroes read split ...passed 00:07:00.603 Test: blockdev write zeroes read split partial ...passed 00:07:00.603 Test: blockdev reset ...[2024-11-20 09:00:39.421068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.603 [2024-11-20 09:00:39.423984] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:00.603 passed 00:07:00.603 Test: blockdev write read 8 blocks ...passed 00:07:00.603 Test: blockdev write read size > 128k ...passed 00:07:00.603 Test: blockdev write read invalid size ...passed 00:07:00.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.603 Test: blockdev write read max offset ...passed 00:07:00.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.603 Test: blockdev writev readv 8 blocks ...passed 00:07:00.603 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.603 Test: blockdev writev readv block ...passed 00:07:00.603 Test: blockdev writev readv size > 128k ...passed 00:07:00.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.603 Test: blockdev comparev and writev ...[2024-11-20 09:00:39.444046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfe30000 len:0x1000 00:07:00.604 [2024-11-20 09:00:39.444104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.604 passed 00:07:00.604 Test: blockdev nvme passthru rw ...passed 00:07:00.604 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.604 Test: blockdev nvme admin passthru ...passed 00:07:00.604 Test: blockdev copy ...passed 00:07:00.604 Suite: bdevio tests on: Nvme1n1p1 00:07:00.604 Test: blockdev write read block ...passed 00:07:00.604 Test: blockdev write zeroes read block ...passed 00:07:00.604 Test: blockdev write zeroes read no split ...passed 00:07:00.604 Test: blockdev write zeroes read split ...passed 00:07:00.604 Test: blockdev write zeroes read split partial ...passed 00:07:00.604 Test: blockdev reset ...[2024-11-20 09:00:39.501547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:00.604 [2024-11-20 09:00:39.504110] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
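Worth noting in the comparev output above: the COMPARE for Nvme1n1p2 goes to lba:655360 even though the test targets partition-relative block 0. That matches "offset_blocks": 655360 in the earlier bdev_get_bdevs dump: the gpt bdev translates every I/O by the partition offset, and 655360 blocks x 4096 bytes per block = 2560 MiB into the raw Nvme1n1 namespace. The Nvme1n1p1 suite just below shows the same translation with lba:256, matching its "offset_blocks": 256.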
00:07:00.604 passed 00:07:00.604 Test: blockdev write read 8 blocks ...passed 00:07:00.604 Test: blockdev write read size > 128k ...passed 00:07:00.604 Test: blockdev write read invalid size ...passed 00:07:00.604 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.604 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.604 Test: blockdev write read max offset ...passed 00:07:00.604 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.604 Test: blockdev writev readv 8 blocks ...passed 00:07:00.604 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.604 Test: blockdev writev readv block ...passed 00:07:00.604 Test: blockdev writev readv size > 128k ...passed 00:07:00.604 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.604 Test: blockdev comparev and writev ...[2024-11-20 09:00:39.518197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x29ba0e000 len:0x1000 00:07:00.604 [2024-11-20 09:00:39.518248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:00.604 passed 00:07:00.604 Test: blockdev nvme passthru rw ...passed 00:07:00.604 Test: blockdev nvme passthru vendor specific ...passed 00:07:00.604 Test: blockdev nvme admin passthru ...passed 00:07:00.864 Test: blockdev copy ...passed 00:07:00.864 Suite: bdevio tests on: Nvme0n1 00:07:00.864 Test: blockdev write read block ...passed 00:07:00.864 Test: blockdev write zeroes read block ...passed 00:07:00.864 Test: blockdev write zeroes read no split ...passed 00:07:00.864 Test: blockdev write zeroes read split ...passed 00:07:00.864 Test: blockdev write zeroes read split partial ...passed 00:07:00.864 Test: blockdev reset ...[2024-11-20 09:00:39.572216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:00.864 [2024-11-20 09:00:39.574768] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:00.864 passed 00:07:00.864 Test: blockdev write read 8 blocks ...passed 00:07:00.864 Test: blockdev write read size > 128k ...passed 00:07:00.864 Test: blockdev write read invalid size ...passed 00:07:00.864 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:00.864 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:00.864 Test: blockdev write read max offset ...passed 00:07:00.864 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:00.864 Test: blockdev writev readv 8 blocks ...passed 00:07:00.864 Test: blockdev writev readv 30 x 1block ...passed 00:07:00.864 Test: blockdev writev readv block ...passed 00:07:00.864 Test: blockdev writev readv size > 128k ...passed 00:07:00.864 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:00.864 Test: blockdev comparev and writev ...passed 00:07:00.864 Test: blockdev nvme passthru rw ...[2024-11-20 09:00:39.590620] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:00.864 separate metadata which is not supported yet. 
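The comparev_and_writev case is skipped on Nvme0n1 because that namespace is formatted with separate (non-interleaved) metadata, which the compare-and-write path does not support yet. One way to inspect a bdev's metadata layout is the bdev_get_bdevs RPC; a minimal sketch, assuming a running SPDK app on the default RPC socket (the exact JSON field names, such as md_size and md_interleave, are assumptions that may vary by SPDK version):

    # Query the bdev and pull out its metadata-related fields.
    # If a field name is absent in this SPDK version, grep simply prints nothing.
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | grep -E '"md_size"|"md_interleave"'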
00:07:00.864 passed 00:07:00.864 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:00:39.591572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:00.864 [2024-11-20 09:00:39.591612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:00.864 passed 00:07:00.864 Test: blockdev nvme admin passthru ...passed 00:07:00.864 Test: blockdev copy ...passed 00:07:00.864 00:07:00.864 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.864 suites 7 7 n/a 0 0 00:07:00.864 tests 161 161 161 0 0 00:07:00.864 asserts 1025 1025 1025 0 n/a 00:07:00.864 00:07:00.864 Elapsed time = 1.459 seconds 00:07:00.864 0 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61480 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61480 ']' 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61480 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61480 00:07:00.864 killing process with pid 61480 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61480' 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61480 00:07:00.864 09:00:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61480 00:07:01.434 ************************************ 00:07:01.434 END TEST bdev_bounds 00:07:01.434 ************************************ 00:07:01.434 09:00:40 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:01.434 00:07:01.434 real 0m2.456s 00:07:01.434 user 0m6.329s 00:07:01.434 sys 0m0.321s 00:07:01.434 09:00:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.434 09:00:40 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.695 09:00:40 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.695 09:00:40 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:01.695 09:00:40 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.695 09:00:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.695 ************************************ 00:07:01.695 START TEST bdev_nbd 00:07:01.695 ************************************ 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61539 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61539 /var/tmp/spdk-nbd.sock 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61539 ']' 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.695 09:00:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:01.695 [2024-11-20 09:00:40.473076] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
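At this point the harness launches a dedicated bdev_svc app that owns a private RPC socket for the NBD tests; the "Waiting for process to start up..." line above is waitforlisten polling until the app answers on /var/tmp/spdk-nbd.sock. Reassembled from the traced commands, the setup amounts to roughly the following; this is a sketch of the shell flow shown in the trace, not a verbatim copy of nbd_function_test (pid capture via $! is an assumption):

    # Start bdev_svc with the bdev config and a private RPC socket for NBD tests.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    # Block until the app is up and listening on the UNIX domain socket.
    waitforlisten "$nbd_pid" /var/tmp/spdk-nbd.sock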
00:07:01.695 [2024-11-20 09:00:40.473208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.958 [2024-11-20 09:00:40.631654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.958 [2024-11-20 09:00:40.744045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.526 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:02.786 1+0 records in 00:07:02.786 1+0 records out 00:07:02.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00154377 s, 2.7 MB/s 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:02.786 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.046 1+0 records in 00:07:03.046 1+0 records out 00:07:03.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011546 s, 3.5 MB/s 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.046 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.047 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.047 09:00:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.047 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.047 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.047 09:00:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.307 1+0 records in 00:07:03.307 1+0 records out 00:07:03.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742801 s, 5.5 MB/s 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.307 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:03.568 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:03.568 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.569 1+0 records in 00:07:03.569 1+0 records out 00:07:03.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00142249 s, 2.9 MB/s 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.569 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:03.829 1+0 records in 00:07:03.829 1+0 records out 00:07:03.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000982182 s, 4.2 MB/s 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:03.829 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
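Each nbd_start_disk call above is followed by the same readiness check: the traced waitfornbd helper polls /proc/partitions until the kernel exposes the device, then proves it is readable with a single 4 KiB O_DIRECT read whose output size must be non-zero. Reconstructed from the trace; the sleep between polls is an assumption, since the passing runs above hit on the first attempt and never show it:

    # Wait for /dev/$1 to appear, then verify one direct-I/O read succeeds.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off between polls
        done
        # One 4 KiB block read with iflag=direct, then confirm a non-empty copy.
        dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
            bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        [ "$size" != 0 ]
    }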
00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.089 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.090 1+0 records in 00:07:04.090 1+0 records out 00:07:04.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000953969 s, 4.3 MB/s 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.090 09:00:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.351 1+0 records in 00:07:04.351 1+0 records out 00:07:04.351 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105299 s, 3.9 MB/s 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.351 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd0", 00:07:04.611 "bdev_name": "Nvme0n1" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd1", 00:07:04.611 "bdev_name": "Nvme1n1p1" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd2", 00:07:04.611 "bdev_name": "Nvme1n1p2" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd3", 00:07:04.611 "bdev_name": "Nvme2n1" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd4", 00:07:04.611 "bdev_name": "Nvme2n2" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd5", 00:07:04.611 "bdev_name": "Nvme2n3" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd6", 00:07:04.611 "bdev_name": "Nvme3n1" 00:07:04.611 } 00:07:04.611 ]' 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd0", 00:07:04.611 "bdev_name": "Nvme0n1" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd1", 00:07:04.611 "bdev_name": "Nvme1n1p1" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd2", 00:07:04.611 "bdev_name": "Nvme1n1p2" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd3", 00:07:04.611 "bdev_name": "Nvme2n1" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd4", 00:07:04.611 "bdev_name": "Nvme2n2" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd5", 00:07:04.611 "bdev_name": "Nvme2n3" 00:07:04.611 }, 00:07:04.611 { 00:07:04.611 "nbd_device": "/dev/nbd6", 00:07:04.611 "bdev_name": "Nvme3n1" 00:07:04.611 } 00:07:04.611 ]' 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.611 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.873 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.134 09:00:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.134 09:00:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.395 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.966 09:00:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
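Teardown mirrors the start-up: nbd_stop_disk is issued per device, and the traced waitfornbd_exit helper polls /proc/partitions until the entry disappears, so the next phase never races a half-detached device. Roughly the following; the sleep is an assumption, as the trace only shows the grep and the break once the device is already gone:

    # Wait for /dev/$1 to vanish from /proc/partitions after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed delay between polls
        done
        return 0
    }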
00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.225 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.485 09:00:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.485 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:06.742 /dev/nbd0 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:06.742 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:06.742 1+0 records in 00:07:06.742 1+0 records out 00:07:06.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861614 s, 4.8 MB/s 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:06.743 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:07.000 /dev/nbd1 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.000 09:00:45 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.000 1+0 records in 00:07:07.000 1+0 records out 00:07:07.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00150699 s, 2.7 MB/s 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.000 09:00:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:07.261 /dev/nbd10 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.261 1+0 records in 00:07:07.261 1+0 records out 00:07:07.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000880506 s, 4.7 MB/s 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.261 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:07.576 /dev/nbd11 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.576 1+0 records in 00:07:07.576 1+0 records out 00:07:07.576 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105942 s, 3.9 MB/s 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.576 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:07.837 /dev/nbd12 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.837 1+0 records in 00:07:07.837 1+0 records out 00:07:07.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619943 s, 6.6 MB/s 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.837 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:08.096 /dev/nbd13 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.096 1+0 records in 00:07:08.096 1+0 records out 00:07:08.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119275 s, 3.4 MB/s 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.096 09:00:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:08.358 /dev/nbd14 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.358 1+0 records in 00:07:08.358 1+0 records out 00:07:08.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831685 s, 4.9 MB/s 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.358 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd0", 00:07:08.618 "bdev_name": "Nvme0n1" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd1", 00:07:08.618 "bdev_name": "Nvme1n1p1" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd10", 00:07:08.618 "bdev_name": "Nvme1n1p2" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd11", 00:07:08.618 "bdev_name": "Nvme2n1" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd12", 00:07:08.618 "bdev_name": "Nvme2n2" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd13", 00:07:08.618 "bdev_name": "Nvme2n3" 
00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd14", 00:07:08.618 "bdev_name": "Nvme3n1" 00:07:08.618 } 00:07:08.618 ]' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd0", 00:07:08.618 "bdev_name": "Nvme0n1" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd1", 00:07:08.618 "bdev_name": "Nvme1n1p1" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd10", 00:07:08.618 "bdev_name": "Nvme1n1p2" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd11", 00:07:08.618 "bdev_name": "Nvme2n1" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd12", 00:07:08.618 "bdev_name": "Nvme2n2" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd13", 00:07:08.618 "bdev_name": "Nvme2n3" 00:07:08.618 }, 00:07:08.618 { 00:07:08.618 "nbd_device": "/dev/nbd14", 00:07:08.618 "bdev_name": "Nvme3n1" 00:07:08.618 } 00:07:08.618 ]' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.618 /dev/nbd1 00:07:08.618 /dev/nbd10 00:07:08.618 /dev/nbd11 00:07:08.618 /dev/nbd12 00:07:08.618 /dev/nbd13 00:07:08.618 /dev/nbd14' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.618 /dev/nbd1 00:07:08.618 /dev/nbd10 00:07:08.618 /dev/nbd11 00:07:08.618 /dev/nbd12 00:07:08.618 /dev/nbd13 00:07:08.618 /dev/nbd14' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:08.618 256+0 records in 00:07:08.618 256+0 records out 00:07:08.618 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501836 s, 209 MB/s 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.618 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.190 256+0 records in 00:07:09.190 256+0 records out 00:07:09.190 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.522714 s, 2.0 MB/s 00:07:09.190 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.190 09:00:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.451 256+0 records in 00:07:09.451 256+0 records out 00:07:09.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.300499 s, 3.5 MB/s 00:07:09.451 09:00:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.451 09:00:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:10.021 256+0 records in 00:07:10.021 256+0 records out 00:07:10.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.455747 s, 2.3 MB/s 00:07:10.021 09:00:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.021 09:00:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:10.280 256+0 records in 00:07:10.280 256+0 records out 00:07:10.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.291513 s, 3.6 MB/s 00:07:10.280 09:00:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.280 09:00:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.540 256+0 records in 00:07:10.540 256+0 records out 00:07:10.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246782 s, 4.2 MB/s 00:07:10.540 09:00:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.540 09:00:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.800 256+0 records in 00:07:10.800 256+0 records out 00:07:10.800 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.285356 s, 3.7 MB/s 00:07:10.800 09:00:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.800 09:00:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:11.373 256+0 records in 00:07:11.373 256+0 records out 00:07:11.373 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.560769 s, 1.9 MB/s 00:07:11.373 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:11.373 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.373 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.373 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.373 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.374 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.635 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.895 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.153 09:00:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.412 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.672 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.931 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.190 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.191 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.191 09:00:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.191 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.191 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.191 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:13.451 malloc_lvol_verify 00:07:13.451 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:13.711 8980fea6-dae2-4a92-9414-57b96a833b13 00:07:13.712 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:13.972 1c405643-2106-4f20-9bea-44e7ed2b6c06 00:07:13.972 09:00:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:14.232 /dev/nbd0 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:14.232 mke2fs 1.47.0 (5-Feb-2023) 00:07:14.232 Discarding device blocks: 0/4096 done 00:07:14.232 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:14.232 00:07:14.232 Allocating group tables: 0/1 done 00:07:14.232 Writing inode tables: 0/1 done 00:07:14.232 Creating journal (1024 blocks): done 00:07:14.232 Writing superblocks and filesystem accounting information: 0/1 done 00:07:14.232 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:14.232 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61539 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61539 ']' 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61539 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61539 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.564 killing process with pid 61539 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61539' 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61539 00:07:14.564 09:00:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61539 00:07:16.041 09:00:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:16.041 00:07:16.041 real 0m14.366s 00:07:16.041 user 0m18.494s 00:07:16.041 sys 0m4.709s 00:07:16.041 09:00:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.041 09:00:54 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:16.041 ************************************ 00:07:16.042 END TEST bdev_nbd 00:07:16.042 ************************************ 00:07:16.042 09:00:54 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:16.042 09:00:54 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:16.042 09:00:54 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:16.042 skipping fio tests on NVMe due to multi-ns failures. 00:07:16.042 09:00:54 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:16.042 09:00:54 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:16.042 09:00:54 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.042 09:00:54 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:16.042 09:00:54 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.042 09:00:54 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.042 ************************************ 00:07:16.042 START TEST bdev_verify 00:07:16.042 ************************************ 00:07:16.042 09:00:54 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:16.042 [2024-11-20 09:00:54.907884] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:16.042 [2024-11-20 09:00:54.908015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61984 ] 00:07:16.301 [2024-11-20 09:00:55.069431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:16.301 [2024-11-20 09:00:55.174950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.301 [2024-11-20 09:00:55.174968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.869 Running I/O for 5 seconds... 
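The verify stage is a single bdevperf invocation; the flags below are taken straight from the traced command line (-C and the trailing empty argument are carried over from the trace without further interpretation):

```bash
# -q 128     keep 128 I/Os outstanding per job
# -o 4096    use 4 KiB I/Os
# -w verify  write a pattern, read it back, and compare
# -t 5       run each job for 5 seconds
# -m 0x3     run on cores 0 and 1 (the two reactors in the log above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
```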
00:07:19.190 17216.00 IOPS, 67.25 MiB/s [2024-11-20T09:00:59.086Z] 17920.00 IOPS, 70.00 MiB/s [2024-11-20T09:01:00.473Z] 17877.33 IOPS, 69.83 MiB/s [2024-11-20T09:01:01.043Z] 18272.00 IOPS, 71.38 MiB/s [2024-11-20T09:01:01.043Z] 18112.00 IOPS, 70.75 MiB/s 00:07:22.124 Latency(us) 00:07:22.124 [2024-11-20T09:01:01.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:22.124 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.124 Verification LBA range: start 0x0 length 0xbd0bd 00:07:22.124 Nvme0n1 : 5.05 1242.21 4.85 0.00 0.00 102560.83 21878.94 92758.65 00:07:22.124 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.124 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:22.125 Nvme0n1 : 5.06 1290.01 5.04 0.00 0.00 98749.36 20064.10 104051.00 00:07:22.125 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x0 length 0x4ff80 00:07:22.125 Nvme1n1p1 : 5.09 1245.47 4.87 0.00 0.00 102030.36 11947.72 87515.77 00:07:22.125 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:22.125 Nvme1n1p1 : 5.06 1289.43 5.04 0.00 0.00 98574.11 23088.84 96791.63 00:07:22.125 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x0 length 0x4ff7f 00:07:22.125 Nvme1n1p2 : 5.09 1244.92 4.86 0.00 0.00 101796.82 12250.19 84289.38 00:07:22.125 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:22.125 Nvme1n1p2 : 5.09 1294.84 5.06 0.00 0.00 98089.33 10233.70 91548.75 00:07:22.125 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x0 length 0x80000 00:07:22.125 Nvme2n1 : 5.11 1253.06 4.89 0.00 0.00 101238.23 14518.74 81869.59 00:07:22.125 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x80000 length 0x80000 00:07:22.125 Nvme2n1 : 5.09 1294.29 5.06 0.00 0.00 97950.05 10384.94 87112.47 00:07:22.125 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x0 length 0x80000 00:07:22.125 Nvme2n2 : 5.11 1252.42 4.89 0.00 0.00 101044.57 15526.99 84289.38 00:07:22.125 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x80000 length 0x80000 00:07:22.125 Nvme2n2 : 5.10 1293.69 5.05 0.00 0.00 97769.28 10788.23 81869.59 00:07:22.125 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x0 length 0x80000 00:07:22.125 Nvme2n3 : 5.11 1251.64 4.89 0.00 0.00 100853.19 17039.36 87515.77 00:07:22.125 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x80000 length 0x80000 00:07:22.125 Nvme2n3 : 5.11 1302.96 5.09 0.00 0.00 97093.63 9275.86 84692.68 00:07:22.125 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x0 length 0x20000 00:07:22.125 Nvme3n1 : 5.12 1251.05 4.89 0.00 0.00 100695.00 12603.08 92758.65 00:07:22.125 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:22.125 Verification LBA range: start 0x20000 length 0x20000 
00:07:22.125 Nvme3n1 : 5.11 1302.13 5.09 0.00 0.00 96913.95 10989.88 86709.17 00:07:22.125 [2024-11-20T09:01:01.044Z] =================================================================================================================== 00:07:22.125 [2024-11-20T09:01:01.044Z] Total : 17808.11 69.56 0.00 0.00 99632.34 9275.86 104051.00 00:07:23.501 00:07:23.501 real 0m7.201s 00:07:23.501 user 0m13.421s 00:07:23.501 sys 0m0.233s 00:07:23.501 09:01:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.501 ************************************ 00:07:23.501 END TEST bdev_verify 00:07:23.501 ************************************ 00:07:23.501 09:01:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.501 09:01:02 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.501 09:01:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:23.501 09:01:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.501 09:01:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.501 ************************************ 00:07:23.501 START TEST bdev_verify_big_io 00:07:23.501 ************************************ 00:07:23.501 09:01:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:23.501 [2024-11-20 09:01:02.169259] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:23.501 [2024-11-20 09:01:02.169401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62082 ] 00:07:23.501 [2024-11-20 09:01:02.333462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.761 [2024-11-20 09:01:02.439000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.761 [2024-11-20 09:01:02.439260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.332 Running I/O for 5 seconds... 
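The MiB/s column in the bdev_verify summary above follows directly from IOPS times I/O size; with -o 4096 it is simply IOPS divided by 256:

```bash
# Sanity check of the totals row: MiB/s = IOPS * io_size / 2^20.
awk 'BEGIN { printf "%.2f MiB/s\n", 17808.11 * 4096 / 1048576 }'
# prints 69.56 MiB/s, matching the "Total : 17808.11 69.56" row above.
```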
00:07:30.167 112.00 IOPS, 7.00 MiB/s [2024-11-20T09:01:09.657Z] 1587.00 IOPS, 99.19 MiB/s [2024-11-20T09:01:09.657Z] 2660.33 IOPS, 166.27 MiB/s 00:07:30.738 Latency(us) 00:07:30.738 [2024-11-20T09:01:09.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.738 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x0 length 0xbd0b 00:07:30.738 Nvme0n1 : 5.99 72.15 4.51 0.00 0.00 1668923.72 28230.89 2064888.12 00:07:30.738 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:30.738 Nvme0n1 : 5.98 85.55 5.35 0.00 0.00 1422437.32 25508.63 1503496.66 00:07:30.738 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x0 length 0x4ff8 00:07:30.738 Nvme1n1p1 : 6.15 83.24 5.20 0.00 0.00 1383144.37 105664.20 1303460.63 00:07:30.738 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:30.738 Nvme1n1p1 : 6.15 86.70 5.42 0.00 0.00 1351671.82 123409.33 1303460.63 00:07:30.738 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x0 length 0x4ff7 00:07:30.738 Nvme1n1p2 : 6.15 87.76 5.49 0.00 0.00 1293515.59 160512.79 1187310.67 00:07:30.738 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:30.738 Nvme1n1p2 : 6.15 87.32 5.46 0.00 0.00 1294060.09 158899.59 1129235.69 00:07:30.738 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x0 length 0x8000 00:07:30.738 Nvme2n1 : 6.24 92.27 5.77 0.00 0.00 1198978.32 86305.87 1232480.10 00:07:30.738 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.738 Verification LBA range: start 0x8000 length 0x8000 00:07:30.739 Nvme2n1 : 6.35 89.76 5.61 0.00 0.00 1215673.87 129862.10 1264743.98 00:07:30.739 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.739 Verification LBA range: start 0x0 length 0x8000 00:07:30.739 Nvme2n2 : 6.35 97.22 6.08 0.00 0.00 1098486.80 45976.02 1264743.98 00:07:30.739 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.739 Verification LBA range: start 0x8000 length 0x8000 00:07:30.739 Nvme2n2 : 6.36 88.32 5.52 0.00 0.00 1206871.00 62511.26 2413337.99 00:07:30.739 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.739 Verification LBA range: start 0x0 length 0x8000 00:07:30.739 Nvme2n3 : 6.35 100.81 6.30 0.00 0.00 1025560.89 55655.19 1303460.63 00:07:30.739 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.739 Verification LBA range: start 0x8000 length 0x8000 00:07:30.739 Nvme2n3 : 6.37 92.85 5.80 0.00 0.00 1108070.54 11090.71 2155226.98 00:07:30.739 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:30.739 Verification LBA range: start 0x0 length 0x2000 00:07:30.739 Nvme3n1 : 6.37 110.45 6.90 0.00 0.00 902511.35 5494.94 1335724.50 00:07:30.739 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:30.739 Verification LBA range: start 0x2000 length 0x2000 00:07:30.739 Nvme3n1 : 6.40 102.42 6.40 0.00 0.00 967922.39 9023.80 2206849.18 00:07:30.739 
[2024-11-20T09:01:09.658Z] =================================================================================================================== 00:07:30.739 [2024-11-20T09:01:09.658Z] Total : 1276.83 79.80 0.00 0.00 1201784.48 5494.94 2413337.99 00:07:32.650 00:07:32.650 real 0m9.193s 00:07:32.650 user 0m17.411s 00:07:32.650 sys 0m0.246s 00:07:32.650 09:01:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.650 ************************************ 00:07:32.650 END TEST bdev_verify_big_io 00:07:32.650 ************************************ 00:07:32.650 09:01:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 09:01:11 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.650 09:01:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:32.650 09:01:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.650 09:01:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:32.650 ************************************ 00:07:32.650 START TEST bdev_write_zeroes 00:07:32.650 ************************************ 00:07:32.650 09:01:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:32.650 [2024-11-20 09:01:11.422742] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:32.650 [2024-11-20 09:01:11.422860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62197 ] 00:07:32.910 [2024-11-20 09:01:11.584105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.910 [2024-11-20 09:01:11.687852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.478 Running I/O for 1 seconds... 
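The write_zeroes pass reuses the same bdevperf binary with only the workload and runtime changed; a minimal reconstruction of the traced invocation (single-core this time, per the -c 0x1 EAL parameters above):

```bash
# Same harness, new workload: issue write-zeroes commands instead of a
# write/read/compare verify cycle. Flags as traced.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1 ''
```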
00:07:35.801 14742.00 IOPS, 57.59 MiB/s [2024-11-20T09:01:14.980Z] 7781.00 IOPS, 30.39 MiB/s 00:07:36.061 Latency(us) 00:07:36.061 [2024-11-20T09:01:14.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:36.061 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme0n1 : 2.51 642.84 2.51 0.00 0.00 152428.53 6099.89 1806777.11 00:07:36.061 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme1n1p1 : 1.10 2169.54 8.47 0.00 0.00 58840.36 10838.65 240365.88 00:07:36.061 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme1n1p2 : 1.10 2088.10 8.16 0.00 0.00 60981.91 11141.12 240365.88 00:07:36.061 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme2n1 : 1.10 2085.86 8.15 0.00 0.00 60876.03 11393.18 240365.88 00:07:36.061 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme2n2 : 1.11 2083.64 8.14 0.00 0.00 60885.33 11342.77 235526.30 00:07:36.061 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme2n3 : 1.11 2081.42 8.13 0.00 0.00 60758.97 11494.01 235526.30 00:07:36.061 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:36.061 Nvme3n1 : 1.21 1998.08 7.80 0.00 0.00 60577.23 8166.79 253271.43 00:07:36.061 [2024-11-20T09:01:14.980Z] =================================================================================================================== 00:07:36.061 [2024-11-20T09:01:14.980Z] Total : 13149.48 51.37 0.00 0.00 69967.47 6099.89 1806777.11 00:07:37.002 00:07:37.002 real 0m4.355s 00:07:37.002 user 0m4.032s 00:07:37.002 sys 0m0.201s 00:07:37.002 09:01:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.002 09:01:15 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:37.002 ************************************ 00:07:37.002 END TEST bdev_write_zeroes 00:07:37.002 ************************************ 00:07:37.002 09:01:15 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.002 09:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:37.002 09:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.002 09:01:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.002 ************************************ 00:07:37.002 START TEST bdev_json_nonenclosed 00:07:37.002 ************************************ 00:07:37.002 09:01:15 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.002 [2024-11-20 09:01:15.843133] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:07:37.002 [2024-11-20 09:01:15.843255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62267 ] 00:07:37.262 [2024-11-20 09:01:16.013781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.262 [2024-11-20 09:01:16.127792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.262 [2024-11-20 09:01:16.127884] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:37.262 [2024-11-20 09:01:16.127902] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:37.262 [2024-11-20 09:01:16.127910] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:37.523 00:07:37.523 real 0m0.532s 00:07:37.523 user 0m0.332s 00:07:37.523 sys 0m0.095s 00:07:37.523 09:01:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:37.523 ************************************ 00:07:37.523 END TEST bdev_json_nonenclosed 00:07:37.523 ************************************ 00:07:37.523 09:01:16 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:37.523 09:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.523 09:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:37.523 09:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.523 09:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:37.523 ************************************ 00:07:37.523 START TEST bdev_json_nonarray 00:07:37.523 ************************************ 00:07:37.523 09:01:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:37.523 [2024-11-20 09:01:16.435338] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:37.523 [2024-11-20 09:01:16.435458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62292 ] 00:07:37.786 [2024-11-20 09:01:16.596890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.786 [2024-11-20 09:01:16.698485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.786 [2024-11-20 09:01:16.698574] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
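The two JSON negative tests feed deliberately malformed configs to bdevperf. The actual nonenclosed.json and nonarray.json fixtures are not shown in this log, so the bodies below are assumed minimal reproducers matching the two errors traced above:

```bash
# ASSUMED contents - the real test fixtures are not visible in this log.

# Triggers "Invalid JSON configuration: not enclosed in {}":
# valid JSON, but the top-level value is an array, not an object.
cat > /tmp/nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF

# Triggers "Invalid JSON configuration: 'subsystems' should be an array":
# enclosed in {}, but the key holds the wrong type.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": "not-an-array" }
EOF
```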
00:07:37.786 [2024-11-20 09:01:16.698591] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:37.786 [2024-11-20 09:01:16.698601] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.047 00:07:38.047 real 0m0.503s 00:07:38.047 user 0m0.303s 00:07:38.047 sys 0m0.095s 00:07:38.047 ************************************ 00:07:38.047 END TEST bdev_json_nonarray 00:07:38.047 ************************************ 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:38.047 09:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:38.047 09:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:38.047 09:01:16 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:38.047 09:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.047 09:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.047 09:01:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.047 ************************************ 00:07:38.047 START TEST bdev_gpt_uuid 00:07:38.047 ************************************ 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62318 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62318 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62318 ']' 00:07:38.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.047 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.048 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.048 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:38.048 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.048 09:01:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 [2024-11-20 09:01:17.020454] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:07:38.308 [2024-11-20 09:01:17.020576] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62318 ] 00:07:38.308 [2024-11-20 09:01:17.181057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.568 [2024-11-20 09:01:17.283501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.141 09:01:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.141 09:01:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:39.141 09:01:17 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.141 09:01:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.141 09:01:17 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.401 Some configs were skipped because the RPC state that can call them passed over. 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:39.401 { 00:07:39.401 "name": "Nvme1n1p1", 00:07:39.401 "aliases": [ 00:07:39.401 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:39.401 ], 00:07:39.401 "product_name": "GPT Disk", 00:07:39.401 "block_size": 4096, 00:07:39.401 "num_blocks": 655104, 00:07:39.401 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:39.401 "assigned_rate_limits": { 00:07:39.401 "rw_ios_per_sec": 0, 00:07:39.401 "rw_mbytes_per_sec": 0, 00:07:39.401 "r_mbytes_per_sec": 0, 00:07:39.401 "w_mbytes_per_sec": 0 00:07:39.401 }, 00:07:39.401 "claimed": false, 00:07:39.401 "zoned": false, 00:07:39.401 "supported_io_types": { 00:07:39.401 "read": true, 00:07:39.401 "write": true, 00:07:39.401 "unmap": true, 00:07:39.401 "flush": true, 00:07:39.401 "reset": true, 00:07:39.401 "nvme_admin": false, 00:07:39.401 "nvme_io": false, 00:07:39.401 "nvme_io_md": false, 00:07:39.401 "write_zeroes": true, 00:07:39.401 "zcopy": false, 00:07:39.401 "get_zone_info": false, 00:07:39.401 "zone_management": false, 00:07:39.401 "zone_append": false, 00:07:39.401 "compare": true, 00:07:39.401 "compare_and_write": false, 00:07:39.401 "abort": true, 00:07:39.401 "seek_hole": false, 00:07:39.401 "seek_data": false, 00:07:39.401 "copy": true, 00:07:39.401 "nvme_iov_md": false 00:07:39.401 }, 00:07:39.401 "driver_specific": { 
00:07:39.401 "gpt": { 00:07:39.401 "base_bdev": "Nvme1n1", 00:07:39.401 "offset_blocks": 256, 00:07:39.401 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:39.401 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:39.401 "partition_name": "SPDK_TEST_first" 00:07:39.401 } 00:07:39.401 } 00:07:39.401 } 00:07:39.401 ]' 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:39.401 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:39.661 { 00:07:39.661 "name": "Nvme1n1p2", 00:07:39.661 "aliases": [ 00:07:39.661 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:39.661 ], 00:07:39.661 "product_name": "GPT Disk", 00:07:39.661 "block_size": 4096, 00:07:39.661 "num_blocks": 655103, 00:07:39.661 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:39.661 "assigned_rate_limits": { 00:07:39.661 "rw_ios_per_sec": 0, 00:07:39.661 "rw_mbytes_per_sec": 0, 00:07:39.661 "r_mbytes_per_sec": 0, 00:07:39.661 "w_mbytes_per_sec": 0 00:07:39.661 }, 00:07:39.661 "claimed": false, 00:07:39.661 "zoned": false, 00:07:39.661 "supported_io_types": { 00:07:39.661 "read": true, 00:07:39.661 "write": true, 00:07:39.661 "unmap": true, 00:07:39.661 "flush": true, 00:07:39.661 "reset": true, 00:07:39.661 "nvme_admin": false, 00:07:39.661 "nvme_io": false, 00:07:39.661 "nvme_io_md": false, 00:07:39.661 "write_zeroes": true, 00:07:39.661 "zcopy": false, 00:07:39.661 "get_zone_info": false, 00:07:39.661 "zone_management": false, 00:07:39.661 "zone_append": false, 00:07:39.661 "compare": true, 00:07:39.661 "compare_and_write": false, 00:07:39.661 "abort": true, 00:07:39.661 "seek_hole": false, 00:07:39.661 "seek_data": false, 00:07:39.661 "copy": true, 00:07:39.661 "nvme_iov_md": false 00:07:39.661 }, 00:07:39.661 "driver_specific": { 00:07:39.661 "gpt": { 00:07:39.661 "base_bdev": "Nvme1n1", 00:07:39.661 "offset_blocks": 655360, 00:07:39.661 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:39.661 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:39.661 "partition_name": "SPDK_TEST_second" 00:07:39.661 } 00:07:39.661 } 00:07:39.661 } 00:07:39.661 ]' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62318 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62318 ']' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62318 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62318 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.661 killing process with pid 62318 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62318' 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62318 00:07:39.661 09:01:18 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62318 00:07:41.573 00:07:41.573 real 0m3.038s 00:07:41.573 user 0m3.225s 00:07:41.573 sys 0m0.360s 00:07:41.573 09:01:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.573 ************************************ 00:07:41.573 END TEST bdev_gpt_uuid 00:07:41.573 ************************************ 00:07:41.573 09:01:19 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:41.573 09:01:20 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:41.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:41.573 Waiting for block devices as requested 00:07:41.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:41.834 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:41.834 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:42.102 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:47.391 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:47.391 09:01:25 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:47.391 09:01:25 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:47.391 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:47.391 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:47.391 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:47.391 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:47.391 09:01:26 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:47.391 00:07:47.391 real 1m2.194s 00:07:47.391 user 1m18.070s 00:07:47.391 sys 0m9.012s 00:07:47.391 ************************************ 00:07:47.391 END TEST blockdev_nvme_gpt 00:07:47.391 ************************************ 00:07:47.391 09:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.391 09:01:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.391 09:01:26 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:47.391 09:01:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.391 09:01:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.391 09:01:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.391 ************************************ 00:07:47.391 START TEST nvme 00:07:47.391 ************************************ 00:07:47.391 09:01:26 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:47.391 * Looking for test storage... 00:07:47.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:47.391 09:01:26 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:47.391 09:01:26 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:47.391 09:01:26 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:47.652 09:01:26 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:47.652 09:01:26 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.652 09:01:26 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.652 09:01:26 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.652 09:01:26 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.653 09:01:26 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.653 09:01:26 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.653 09:01:26 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.653 09:01:26 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.653 09:01:26 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.653 09:01:26 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:47.653 09:01:26 nvme -- scripts/common.sh@345 -- # : 1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.653 09:01:26 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.653 09:01:26 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@353 -- # local d=1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.653 09:01:26 nvme -- scripts/common.sh@355 -- # echo 1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.653 09:01:26 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:47.653 09:01:26 nvme -- scripts/common.sh@353 -- # local d=2 00:07:47.653 09:01:26 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.653 09:01:26 nvme -- scripts/common.sh@355 -- # echo 2 00:07:47.653 09:01:26 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.653 09:01:26 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.653 09:01:26 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.653 09:01:26 nvme -- scripts/common.sh@368 -- # return 0 00:07:47.653 09:01:26 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.653 09:01:26 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:47.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.653 --rc genhtml_branch_coverage=1 00:07:47.653 --rc genhtml_function_coverage=1 00:07:47.653 --rc genhtml_legend=1 00:07:47.653 --rc geninfo_all_blocks=1 00:07:47.653 --rc geninfo_unexecuted_blocks=1 00:07:47.653 00:07:47.653 ' 00:07:47.653 09:01:26 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:47.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.653 --rc genhtml_branch_coverage=1 00:07:47.653 --rc genhtml_function_coverage=1 00:07:47.653 --rc genhtml_legend=1 00:07:47.653 --rc geninfo_all_blocks=1 00:07:47.653 --rc geninfo_unexecuted_blocks=1 00:07:47.653 00:07:47.653 ' 00:07:47.653 09:01:26 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:47.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.653 --rc genhtml_branch_coverage=1 00:07:47.653 --rc genhtml_function_coverage=1 00:07:47.653 --rc genhtml_legend=1 00:07:47.653 --rc geninfo_all_blocks=1 00:07:47.653 --rc geninfo_unexecuted_blocks=1 00:07:47.653 00:07:47.653 ' 00:07:47.653 09:01:26 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:47.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.653 --rc genhtml_branch_coverage=1 00:07:47.653 --rc genhtml_function_coverage=1 00:07:47.653 --rc genhtml_legend=1 00:07:47.653 --rc geninfo_all_blocks=1 00:07:47.653 --rc geninfo_unexecuted_blocks=1 00:07:47.653 00:07:47.653 ' 00:07:47.653 09:01:26 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:47.915 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.485 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.485 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.485 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.485 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:48.746 09:01:27 nvme -- nvme/nvme.sh@79 -- # uname 00:07:48.746 09:01:27 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:48.746 09:01:27 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:48.746 09:01:27 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:48.746 09:01:27 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:48.746 Waiting for stub to ready for secondary processes... 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1075 -- # stubpid=62951 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62951 ]] 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:48.746 09:01:27 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:48.746 [2024-11-20 09:01:27.499966] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:07:48.746 [2024-11-20 09:01:27.500096] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:49.685 [2024-11-20 09:01:28.273637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.685 [2024-11-20 09:01:28.372908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.685 [2024-11-20 09:01:28.372966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.685 [2024-11-20 09:01:28.372967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.685 [2024-11-20 09:01:28.386297] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:49.685 [2024-11-20 09:01:28.386339] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.685 [2024-11-20 09:01:28.395151] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:49.685 [2024-11-20 09:01:28.395281] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:49.685 [2024-11-20 09:01:28.397719] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.685 [2024-11-20 09:01:28.397942] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:49.685 [2024-11-20 09:01:28.398007] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:49.685 [2024-11-20 09:01:28.400146] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.685 [2024-11-20 09:01:28.400418] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:49.685 [2024-11-20 09:01:28.400487] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:49.685 [2024-11-20 09:01:28.402494] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:49.685 [2024-11-20 09:01:28.402714] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:49.685 [2024-11-20 09:01:28.402760] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:49.685 [2024-11-20 09:01:28.402789] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:49.685 [2024-11-20 09:01:28.402815] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:49.685 09:01:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:49.685 done. 00:07:49.685 09:01:28 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:49.685 09:01:28 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:49.685 09:01:28 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:49.685 09:01:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.685 09:01:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.685 ************************************ 00:07:49.685 START TEST nvme_reset 00:07:49.685 ************************************ 00:07:49.685 09:01:28 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:49.946 Initializing NVMe Controllers 00:07:49.946 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:49.946 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:49.946 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:49.946 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:49.946 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:49.946 00:07:49.946 real 0m0.250s 00:07:49.946 user 0m0.078s 00:07:49.946 sys 0m0.122s 00:07:49.946 09:01:28 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.946 ************************************ 00:07:49.946 END TEST nvme_reset 00:07:49.946 ************************************ 00:07:49.946 09:01:28 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:49.946 09:01:28 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:49.946 09:01:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.946 09:01:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.946 09:01:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.946 ************************************ 00:07:49.946 START TEST nvme_identify 00:07:49.946 ************************************ 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:49.946 09:01:28 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:49.946 09:01:28 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:49.946 09:01:28 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:49.946 09:01:28 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:49.946 09:01:28 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:49.946 09:01:28 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:50.272 [2024-11-20 
09:01:29.055297] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62972 terminated unexpected 00:07:50.272 ===================================================== 00:07:50.272 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.272 ===================================================== 00:07:50.272 Controller Capabilities/Features 00:07:50.272 ================================ 00:07:50.272 Vendor ID: 1b36 00:07:50.272 Subsystem Vendor ID: 1af4 00:07:50.272 Serial Number: 12340 00:07:50.272 Model Number: QEMU NVMe Ctrl 00:07:50.272 Firmware Version: 8.0.0 00:07:50.272 Recommended Arb Burst: 6 00:07:50.272 IEEE OUI Identifier: 00 54 52 00:07:50.272 Multi-path I/O 00:07:50.272 May have multiple subsystem ports: No 00:07:50.272 May have multiple controllers: No 00:07:50.272 Associated with SR-IOV VF: No 00:07:50.272 Max Data Transfer Size: 524288 00:07:50.272 Max Number of Namespaces: 256 00:07:50.272 Max Number of I/O Queues: 64 00:07:50.272 NVMe Specification Version (VS): 1.4 00:07:50.272 NVMe Specification Version (Identify): 1.4 00:07:50.272 Maximum Queue Entries: 2048 00:07:50.272 Contiguous Queues Required: Yes 00:07:50.272 Arbitration Mechanisms Supported 00:07:50.272 Weighted Round Robin: Not Supported 00:07:50.272 Vendor Specific: Not Supported 00:07:50.272 Reset Timeout: 7500 ms 00:07:50.272 Doorbell Stride: 4 bytes 00:07:50.272 NVM Subsystem Reset: Not Supported 00:07:50.272 Command Sets Supported 00:07:50.272 NVM Command Set: Supported 00:07:50.272 Boot Partition: Not Supported 00:07:50.272 Memory Page Size Minimum: 4096 bytes 00:07:50.272 Memory Page Size Maximum: 65536 bytes 00:07:50.272 Persistent Memory Region: Not Supported 00:07:50.272 Optional Asynchronous Events Supported 00:07:50.272 Namespace Attribute Notices: Supported 00:07:50.272 Firmware Activation Notices: Not Supported 00:07:50.272 ANA Change Notices: Not Supported 00:07:50.272 PLE Aggregate Log Change Notices: Not Supported 00:07:50.272 LBA Status Info Alert Notices: Not Supported 00:07:50.272 EGE Aggregate Log Change Notices: Not Supported 00:07:50.272 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.272 Zone Descriptor Change Notices: Not Supported 00:07:50.272 Discovery Log Change Notices: Not Supported 00:07:50.272 Controller Attributes 00:07:50.272 128-bit Host Identifier: Not Supported 00:07:50.272 Non-Operational Permissive Mode: Not Supported 00:07:50.272 NVM Sets: Not Supported 00:07:50.272 Read Recovery Levels: Not Supported 00:07:50.272 Endurance Groups: Not Supported 00:07:50.272 Predictable Latency Mode: Not Supported 00:07:50.272 Traffic Based Keep ALive: Not Supported 00:07:50.272 Namespace Granularity: Not Supported 00:07:50.272 SQ Associations: Not Supported 00:07:50.272 UUID List: Not Supported 00:07:50.272 Multi-Domain Subsystem: Not Supported 00:07:50.272 Fixed Capacity Management: Not Supported 00:07:50.272 Variable Capacity Management: Not Supported 00:07:50.272 Delete Endurance Group: Not Supported 00:07:50.272 Delete NVM Set: Not Supported 00:07:50.272 Extended LBA Formats Supported: Supported 00:07:50.272 Flexible Data Placement Supported: Not Supported 00:07:50.272 00:07:50.272 Controller Memory Buffer Support 00:07:50.272 ================================ 00:07:50.272 Supported: No 00:07:50.272 00:07:50.272 Persistent Memory Region Support 00:07:50.272 ================================ 00:07:50.272 Supported: No 00:07:50.272 00:07:50.272 Admin Command Set Attributes 00:07:50.272 ============================ 00:07:50.272 Security Send/Receive: 
Not Supported 00:07:50.272 Format NVM: Supported 00:07:50.272 Firmware Activate/Download: Not Supported 00:07:50.272 Namespace Management: Supported 00:07:50.272 Device Self-Test: Not Supported 00:07:50.273 Directives: Supported 00:07:50.273 NVMe-MI: Not Supported 00:07:50.273 Virtualization Management: Not Supported 00:07:50.273 Doorbell Buffer Config: Supported 00:07:50.273 Get LBA Status Capability: Not Supported 00:07:50.273 Command & Feature Lockdown Capability: Not Supported 00:07:50.273 Abort Command Limit: 4 00:07:50.273 Async Event Request Limit: 4 00:07:50.273 Number of Firmware Slots: N/A 00:07:50.273 Firmware Slot 1 Read-Only: N/A 00:07:50.273 Firmware Activation Without Reset: N/A 00:07:50.273 Multiple Update Detection Support: N/A 00:07:50.273 Firmware Update Granularity: No Information Provided 00:07:50.273 Per-Namespace SMART Log: Yes 00:07:50.273 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.273 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:50.273 Command Effects Log Page: Supported 00:07:50.273 Get Log Page Extended Data: Supported 00:07:50.273 Telemetry Log Pages: Not Supported 00:07:50.273 Persistent Event Log Pages: Not Supported 00:07:50.273 Supported Log Pages Log Page: May Support 00:07:50.273 Commands Supported & Effects Log Page: Not Supported 00:07:50.273 Feature Identifiers & Effects Log Page:May Support 00:07:50.273 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.273 Data Area 4 for Telemetry Log: Not Supported 00:07:50.273 Error Log Page Entries Supported: 1 00:07:50.273 Keep Alive: Not Supported 00:07:50.273 00:07:50.273 NVM Command Set Attributes 00:07:50.273 ========================== 00:07:50.273 Submission Queue Entry Size 00:07:50.273 Max: 64 00:07:50.273 Min: 64 00:07:50.273 Completion Queue Entry Size 00:07:50.273 Max: 16 00:07:50.273 Min: 16 00:07:50.273 Number of Namespaces: 256 00:07:50.273 Compare Command: Supported 00:07:50.273 Write Uncorrectable Command: Not Supported 00:07:50.273 Dataset Management Command: Supported 00:07:50.273 Write Zeroes Command: Supported 00:07:50.273 Set Features Save Field: Supported 00:07:50.273 Reservations: Not Supported 00:07:50.273 Timestamp: Supported 00:07:50.273 Copy: Supported 00:07:50.273 Volatile Write Cache: Present 00:07:50.273 Atomic Write Unit (Normal): 1 00:07:50.273 Atomic Write Unit (PFail): 1 00:07:50.273 Atomic Compare & Write Unit: 1 00:07:50.273 Fused Compare & Write: Not Supported 00:07:50.273 Scatter-Gather List 00:07:50.273 SGL Command Set: Supported 00:07:50.273 SGL Keyed: Not Supported 00:07:50.273 SGL Bit Bucket Descriptor: Not Supported 00:07:50.273 SGL Metadata Pointer: Not Supported 00:07:50.273 Oversized SGL: Not Supported 00:07:50.273 SGL Metadata Address: Not Supported 00:07:50.273 SGL Offset: Not Supported 00:07:50.273 Transport SGL Data Block: Not Supported 00:07:50.273 Replay Protected Memory Block: Not Supported 00:07:50.273 00:07:50.273 Firmware Slot Information 00:07:50.273 ========================= 00:07:50.273 Active slot: 1 00:07:50.273 Slot 1 Firmware Revision: 1.0 00:07:50.273 00:07:50.273 00:07:50.273 Commands Supported and Effects 00:07:50.273 ============================== 00:07:50.273 Admin Commands 00:07:50.273 -------------- 00:07:50.273 Delete I/O Submission Queue (00h): Supported 00:07:50.273 Create I/O Submission Queue (01h): Supported 00:07:50.273 Get Log Page (02h): Supported 00:07:50.273 Delete I/O Completion Queue (04h): Supported 00:07:50.273 Create I/O Completion Queue (05h): Supported 00:07:50.273 Identify (06h): Supported 
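
For reference, the bdev_gpt_uuid assertions that closed this section reduce to two jq lookups over bdev_get_bdevs output, each compared against the expected partition UUID; the backslash-escaped right-hand sides in the trace are just bash forcing a literal (non-glob) match inside [[ == ]]. A minimal sketch of the same check (the bdev name p1 and the direct rpc.py call are assumptions for illustration; the suite drives the same RPC through its rpc_cmd wrapper):

  # Both the first alias and the GPT unique partition GUID must equal the UUID
  expected=abf1734f-66e5-4c0f-aa29-4021d4d307df          # UUID from this run
  bdev_json=$(scripts/rpc.py bdev_get_bdevs -b p1)       # hypothetical bdev name
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev_json") == "$expected" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json") == "$expected" ]]
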
00:07:50.273 Abort (08h): Supported 00:07:50.273 Set Features (09h): Supported 00:07:50.273 Get Features (0Ah): Supported 00:07:50.273 Asynchronous Event Request (0Ch): Supported 00:07:50.273 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.273 Directive Send (19h): Supported 00:07:50.273 Directive Receive (1Ah): Supported 00:07:50.273 Virtualization Management (1Ch): Supported 00:07:50.273 Doorbell Buffer Config (7Ch): Supported 00:07:50.273 Format NVM (80h): Supported LBA-Change 00:07:50.273 I/O Commands 00:07:50.273 ------------ 00:07:50.273 Flush (00h): Supported LBA-Change 00:07:50.273 Write (01h): Supported LBA-Change 00:07:50.273 Read (02h): Supported 00:07:50.273 Compare (05h): Supported 00:07:50.273 Write Zeroes (08h): Supported LBA-Change 00:07:50.273 Dataset Management (09h): Supported LBA-Change 00:07:50.273 Unknown (0Ch): Supported 00:07:50.273 Unknown (12h): Supported 00:07:50.273 Copy (19h): Supported LBA-Change 00:07:50.273 Unknown (1Dh): Supported LBA-Change 00:07:50.273 00:07:50.273 Error Log 00:07:50.273 ========= 00:07:50.273 00:07:50.273 Arbitration 00:07:50.273 =========== 00:07:50.273 Arbitration Burst: no limit 00:07:50.273 00:07:50.273 Power Management 00:07:50.273 ================ 00:07:50.273 Number of Power States: 1 00:07:50.273 Current Power State: Power State #0 00:07:50.273 Power State #0: 00:07:50.273 Max Power: 25.00 W 00:07:50.273 Non-Operational State: Operational 00:07:50.273 Entry Latency: 16 microseconds 00:07:50.273 Exit Latency: 4 microseconds 00:07:50.273 Relative Read Throughput: 0 00:07:50.273 Relative Read Latency: 0 00:07:50.273 Relative Write Throughput: 0 00:07:50.273 Relative Write Latency: 0 00:07:50.273 Idle Power[2024-11-20 09:01:29.056736] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62972 terminated unexpected 00:07:50.273 : Not Reported 00:07:50.273 Active Power: Not Reported 00:07:50.273 Non-Operational Permissive Mode: Not Supported 00:07:50.273 00:07:50.273 Health Information 00:07:50.273 ================== 00:07:50.273 Critical Warnings: 00:07:50.273 Available Spare Space: OK 00:07:50.273 Temperature: OK 00:07:50.273 Device Reliability: OK 00:07:50.273 Read Only: No 00:07:50.273 Volatile Memory Backup: OK 00:07:50.273 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.273 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.273 Available Spare: 0% 00:07:50.273 Available Spare Threshold: 0% 00:07:50.273 Life Percentage Used: 0% 00:07:50.273 Data Units Read: 603 00:07:50.273 Data Units Written: 531 00:07:50.273 Host Read Commands: 31486 00:07:50.273 Host Write Commands: 31272 00:07:50.273 Controller Busy Time: 0 minutes 00:07:50.273 Power Cycles: 0 00:07:50.273 Power On Hours: 0 hours 00:07:50.273 Unsafe Shutdowns: 0 00:07:50.273 Unrecoverable Media Errors: 0 00:07:50.273 Lifetime Error Log Entries: 0 00:07:50.273 Warning Temperature Time: 0 minutes 00:07:50.273 Critical Temperature Time: 0 minutes 00:07:50.273 00:07:50.273 Number of Queues 00:07:50.273 ================ 00:07:50.273 Number of I/O Submission Queues: 64 00:07:50.273 Number of I/O Completion Queues: 64 00:07:50.273 00:07:50.273 ZNS Specific Controller Data 00:07:50.273 ============================ 00:07:50.273 Zone Append Size Limit: 0 00:07:50.273 00:07:50.273 00:07:50.273 Active Namespaces 00:07:50.273 ================= 00:07:50.273 Namespace ID:1 00:07:50.273 Error Recovery Timeout: Unlimited 00:07:50.273 Command Set Identifier: NVM (00h) 00:07:50.273 Deallocate: Supported 00:07:50.273 
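
The lcov probe at the top of TEST nvme (lt 1.15 2, expanded to cmp_versions 1.15 '<' 2) splits each version string on '.', '-' and ':' and compares the numeric fields left to right, treating missing fields as zero. A standalone sketch of just the less-than path (the real scripts/common.sh handles the other operators the same way):

  lt() {                       # "is version $1 older than version $2"
    local IFS=.-:
    local -a ver1=($1) ver2=($2)
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((i = 0; i < len; i++)); do
      (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0   # first differing field decides
      (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1                   # equal versions are not less-than
  }
  lt 1.15 2 && echo "lcov predates the 2.x option names"   # true here: 1 < 2
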
Deallocated/Unwritten Error: Supported 00:07:50.273 Deallocated Read Value: All 0x00 00:07:50.273 Deallocate in Write Zeroes: Not Supported 00:07:50.273 Deallocated Guard Field: 0xFFFF 00:07:50.273 Flush: Supported 00:07:50.273 Reservation: Not Supported 00:07:50.273 Metadata Transferred as: Separate Metadata Buffer 00:07:50.273 Namespace Sharing Capabilities: Private 00:07:50.273 Size (in LBAs): 1548666 (5GiB) 00:07:50.273 Capacity (in LBAs): 1548666 (5GiB) 00:07:50.273 Utilization (in LBAs): 1548666 (5GiB) 00:07:50.273 Thin Provisioning: Not Supported 00:07:50.273 Per-NS Atomic Units: No 00:07:50.273 Maximum Single Source Range Length: 128 00:07:50.273 Maximum Copy Length: 128 00:07:50.273 Maximum Source Range Count: 128 00:07:50.273 NGUID/EUI64 Never Reused: No 00:07:50.273 Namespace Write Protected: No 00:07:50.273 Number of LBA Formats: 8 00:07:50.273 Current LBA Format: LBA Format #07 00:07:50.273 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.273 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.273 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.273 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.273 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.273 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.273 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.273 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.273 00:07:50.273 NVM Specific Namespace Data 00:07:50.273 =========================== 00:07:50.273 Logical Block Storage Tag Mask: 0 00:07:50.273 Protection Information Capabilities: 00:07:50.273 16b Guard Protection Information Storage Tag Support: No 00:07:50.273 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.273 Storage Tag Check Read Support: No 00:07:50.273 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.274 ===================================================== 00:07:50.274 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.274 ===================================================== 00:07:50.274 Controller Capabilities/Features 00:07:50.274 ================================ 00:07:50.274 Vendor ID: 1b36 00:07:50.274 Subsystem Vendor ID: 1af4 00:07:50.274 Serial Number: 12341 00:07:50.274 Model Number: QEMU NVMe Ctrl 00:07:50.274 Firmware Version: 8.0.0 00:07:50.274 Recommended Arb Burst: 6 00:07:50.274 IEEE OUI Identifier: 00 54 52 00:07:50.274 Multi-path I/O 00:07:50.274 May have multiple subsystem ports: No 00:07:50.274 May have multiple controllers: No 00:07:50.274 Associated with SR-IOV VF: No 00:07:50.274 Max Data Transfer Size: 524288 00:07:50.274 Max Number of Namespaces: 256 00:07:50.274 Max Number of I/O Queues: 64 00:07:50.274 NVMe Specification Version (VS): 1.4 00:07:50.274 NVMe 
Specification Version (Identify): 1.4 00:07:50.274 Maximum Queue Entries: 2048 00:07:50.274 Contiguous Queues Required: Yes 00:07:50.274 Arbitration Mechanisms Supported 00:07:50.274 Weighted Round Robin: Not Supported 00:07:50.274 Vendor Specific: Not Supported 00:07:50.274 Reset Timeout: 7500 ms 00:07:50.274 Doorbell Stride: 4 bytes 00:07:50.274 NVM Subsystem Reset: Not Supported 00:07:50.274 Command Sets Supported 00:07:50.274 NVM Command Set: Supported 00:07:50.274 Boot Partition: Not Supported 00:07:50.274 Memory Page Size Minimum: 4096 bytes 00:07:50.274 Memory Page Size Maximum: 65536 bytes 00:07:50.274 Persistent Memory Region: Not Supported 00:07:50.274 Optional Asynchronous Events Supported 00:07:50.274 Namespace Attribute Notices: Supported 00:07:50.274 Firmware Activation Notices: Not Supported 00:07:50.274 ANA Change Notices: Not Supported 00:07:50.274 PLE Aggregate Log Change Notices: Not Supported 00:07:50.274 LBA Status Info Alert Notices: Not Supported 00:07:50.274 EGE Aggregate Log Change Notices: Not Supported 00:07:50.274 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.274 Zone Descriptor Change Notices: Not Supported 00:07:50.274 Discovery Log Change Notices: Not Supported 00:07:50.274 Controller Attributes 00:07:50.274 128-bit Host Identifier: Not Supported 00:07:50.274 Non-Operational Permissive Mode: Not Supported 00:07:50.274 NVM Sets: Not Supported 00:07:50.274 Read Recovery Levels: Not Supported 00:07:50.274 Endurance Groups: Not Supported 00:07:50.274 Predictable Latency Mode: Not Supported 00:07:50.274 Traffic Based Keep ALive: Not Supported 00:07:50.274 Namespace Granularity: Not Supported 00:07:50.274 SQ Associations: Not Supported 00:07:50.274 UUID List: Not Supported 00:07:50.274 Multi-Domain Subsystem: Not Supported 00:07:50.274 Fixed Capacity Management: Not Supported 00:07:50.274 Variable Capacity Management: Not Supported 00:07:50.274 Delete Endurance Group: Not Supported 00:07:50.274 Delete NVM Set: Not Supported 00:07:50.274 Extended LBA Formats Supported: Supported 00:07:50.274 Flexible Data Placement Supported: Not Supported 00:07:50.274 00:07:50.274 Controller Memory Buffer Support 00:07:50.274 ================================ 00:07:50.274 Supported: No 00:07:50.274 00:07:50.274 Persistent Memory Region Support 00:07:50.274 ================================ 00:07:50.274 Supported: No 00:07:50.274 00:07:50.274 Admin Command Set Attributes 00:07:50.274 ============================ 00:07:50.274 Security Send/Receive: Not Supported 00:07:50.274 Format NVM: Supported 00:07:50.274 Firmware Activate/Download: Not Supported 00:07:50.274 Namespace Management: Supported 00:07:50.274 Device Self-Test: Not Supported 00:07:50.274 Directives: Supported 00:07:50.274 NVMe-MI: Not Supported 00:07:50.274 Virtualization Management: Not Supported 00:07:50.274 Doorbell Buffer Config: Supported 00:07:50.274 Get LBA Status Capability: Not Supported 00:07:50.274 Command & Feature Lockdown Capability: Not Supported 00:07:50.274 Abort Command Limit: 4 00:07:50.274 Async Event Request Limit: 4 00:07:50.274 Number of Firmware Slots: N/A 00:07:50.274 Firmware Slot 1 Read-Only: N/A 00:07:50.274 Firmware Activation Without Reset: N/A 00:07:50.274 Multiple Update Detection Support: N/A 00:07:50.274 Firmware Update Granularity: No Information Provided 00:07:50.274 Per-Namespace SMART Log: Yes 00:07:50.274 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.274 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:50.274 Command Effects Log Page: Supported 
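
The "Waiting for stub to ready for secondary processes..." exchange earlier is a plain file-based handshake: start_stub backgrounds test/app/stub/stub with the DPDK options from the trace (-s 4096 -i 0 -m 0xE), then polls until the stub publishes /var/run/spdk_stub0 or the process exits. A condensed sketch of that loop (the real common.sh adds traps and richer error reporting):

  "$rootdir"/test/app/stub/stub -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo "Waiting for stub to ready for secondary processes..."
  while [[ ! -e /var/run/spdk_stub0 && -e /proc/$stubpid ]]; do
    sleep 1s                                  # same 1s cadence as the trace above
  done
  [[ -e /var/run/spdk_stub0 ]] || exit 1      # stub died before signalling readiness
  echo done.
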
00:07:50.274 Get Log Page Extended Data: Supported 00:07:50.274 Telemetry Log Pages: Not Supported 00:07:50.274 Persistent Event Log Pages: Not Supported 00:07:50.274 Supported Log Pages Log Page: May Support 00:07:50.274 Commands Supported & Effects Log Page: Not Supported 00:07:50.274 Feature Identifiers & Effects Log Page:May Support 00:07:50.274 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.274 Data Area 4 for Telemetry Log: Not Supported 00:07:50.274 Error Log Page Entries Supported: 1 00:07:50.274 Keep Alive: Not Supported 00:07:50.274 00:07:50.274 NVM Command Set Attributes 00:07:50.274 ========================== 00:07:50.274 Submission Queue Entry Size 00:07:50.274 Max: 64 00:07:50.274 Min: 64 00:07:50.274 Completion Queue Entry Size 00:07:50.274 Max: 16 00:07:50.274 Min: 16 00:07:50.274 Number of Namespaces: 256 00:07:50.274 Compare Command: Supported 00:07:50.274 Write Uncorrectable Command: Not Supported 00:07:50.274 Dataset Management Command: Supported 00:07:50.274 Write Zeroes Command: Supported 00:07:50.274 Set Features Save Field: Supported 00:07:50.274 Reservations: Not Supported 00:07:50.274 Timestamp: Supported 00:07:50.274 Copy: Supported 00:07:50.274 Volatile Write Cache: Present 00:07:50.274 Atomic Write Unit (Normal): 1 00:07:50.274 Atomic Write Unit (PFail): 1 00:07:50.274 Atomic Compare & Write Unit: 1 00:07:50.274 Fused Compare & Write: Not Supported 00:07:50.274 Scatter-Gather List 00:07:50.274 SGL Command Set: Supported 00:07:50.274 SGL Keyed: Not Supported 00:07:50.274 SGL Bit Bucket Descriptor: Not Supported 00:07:50.274 SGL Metadata Pointer: Not Supported 00:07:50.274 Oversized SGL: Not Supported 00:07:50.274 SGL Metadata Address: Not Supported 00:07:50.274 SGL Offset: Not Supported 00:07:50.274 Transport SGL Data Block: Not Supported 00:07:50.274 Replay Protected Memory Block: Not Supported 00:07:50.274 00:07:50.274 Firmware Slot Information 00:07:50.274 ========================= 00:07:50.274 Active slot: 1 00:07:50.274 Slot 1 Firmware Revision: 1.0 00:07:50.274 00:07:50.274 00:07:50.274 Commands Supported and Effects 00:07:50.274 ============================== 00:07:50.274 Admin Commands 00:07:50.274 -------------- 00:07:50.274 Delete I/O Submission Queue (00h): Supported 00:07:50.274 Create I/O Submission Queue (01h): Supported 00:07:50.274 Get Log Page (02h): Supported 00:07:50.274 Delete I/O Completion Queue (04h): Supported 00:07:50.274 Create I/O Completion Queue (05h): Supported 00:07:50.274 Identify (06h): Supported 00:07:50.274 Abort (08h): Supported 00:07:50.274 Set Features (09h): Supported 00:07:50.274 Get Features (0Ah): Supported 00:07:50.274 Asynchronous Event Request (0Ch): Supported 00:07:50.274 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.274 Directive Send (19h): Supported 00:07:50.274 Directive Receive (1Ah): Supported 00:07:50.274 Virtualization Management (1Ch): Supported 00:07:50.274 Doorbell Buffer Config (7Ch): Supported 00:07:50.274 Format NVM (80h): Supported LBA-Change 00:07:50.274 I/O Commands 00:07:50.274 ------------ 00:07:50.274 Flush (00h): Supported LBA-Change 00:07:50.274 Write (01h): Supported LBA-Change 00:07:50.274 Read (02h): Supported 00:07:50.274 Compare (05h): Supported 00:07:50.274 Write Zeroes (08h): Supported LBA-Change 00:07:50.274 Dataset Management (09h): Supported LBA-Change 00:07:50.274 Unknown (0Ch): Supported 00:07:50.274 Unknown (12h): Supported 00:07:50.274 Copy (19h): Supported LBA-Change 00:07:50.274 Unknown (1Dh): Supported LBA-Change 00:07:50.274 00:07:50.274 Error 
Log 00:07:50.274 ========= 00:07:50.274 00:07:50.274 Arbitration 00:07:50.274 =========== 00:07:50.274 Arbitration Burst: no limit 00:07:50.274 00:07:50.274 Power Management 00:07:50.274 ================ 00:07:50.274 Number of Power States: 1 00:07:50.275 Current Power State: Power State #0 00:07:50.275 Power State #0: 00:07:50.275 Max Power: 25.00 W 00:07:50.275 Non-Operational State: Operational 00:07:50.275 Entry Latency: 16 microseconds 00:07:50.275 Exit Latency: 4 microseconds 00:07:50.275 Relative Read Throughput: 0 00:07:50.275 Relative Read Latency: 0 00:07:50.275 Relative Write Throughput: 0 00:07:50.275 Relative Write Latency: 0 00:07:50.275 Idle Power: Not Reported 00:07:50.275 Active Power: Not Reported 00:07:50.275 Non-Operational Permissive Mode: Not Supported 00:07:50.275 00:07:50.275 Health Information 00:07:50.275 ================== 00:07:50.275 Critical Warnings: 00:07:50.275 Available Spare Space: OK 00:07:50.275 Temperature: [2024-11-20 09:01:29.059082] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62972 terminated unexpected 00:07:50.275 OK 00:07:50.275 Device Reliability: OK 00:07:50.275 Read Only: No 00:07:50.275 Volatile Memory Backup: OK 00:07:50.275 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.275 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.275 Available Spare: 0% 00:07:50.275 Available Spare Threshold: 0% 00:07:50.275 Life Percentage Used: 0% 00:07:50.275 Data Units Read: 921 00:07:50.275 Data Units Written: 794 00:07:50.275 Host Read Commands: 46616 00:07:50.275 Host Write Commands: 45513 00:07:50.275 Controller Busy Time: 0 minutes 00:07:50.275 Power Cycles: 0 00:07:50.275 Power On Hours: 0 hours 00:07:50.275 Unsafe Shutdowns: 0 00:07:50.275 Unrecoverable Media Errors: 0 00:07:50.275 Lifetime Error Log Entries: 0 00:07:50.275 Warning Temperature Time: 0 minutes 00:07:50.275 Critical Temperature Time: 0 minutes 00:07:50.275 00:07:50.275 Number of Queues 00:07:50.275 ================ 00:07:50.275 Number of I/O Submission Queues: 64 00:07:50.275 Number of I/O Completion Queues: 64 00:07:50.275 00:07:50.275 ZNS Specific Controller Data 00:07:50.275 ============================ 00:07:50.275 Zone Append Size Limit: 0 00:07:50.275 00:07:50.275 00:07:50.275 Active Namespaces 00:07:50.275 ================= 00:07:50.275 Namespace ID:1 00:07:50.275 Error Recovery Timeout: Unlimited 00:07:50.275 Command Set Identifier: NVM (00h) 00:07:50.275 Deallocate: Supported 00:07:50.275 Deallocated/Unwritten Error: Supported 00:07:50.275 Deallocated Read Value: All 0x00 00:07:50.275 Deallocate in Write Zeroes: Not Supported 00:07:50.275 Deallocated Guard Field: 0xFFFF 00:07:50.275 Flush: Supported 00:07:50.275 Reservation: Not Supported 00:07:50.275 Namespace Sharing Capabilities: Private 00:07:50.275 Size (in LBAs): 1310720 (5GiB) 00:07:50.275 Capacity (in LBAs): 1310720 (5GiB) 00:07:50.275 Utilization (in LBAs): 1310720 (5GiB) 00:07:50.275 Thin Provisioning: Not Supported 00:07:50.275 Per-NS Atomic Units: No 00:07:50.275 Maximum Single Source Range Length: 128 00:07:50.275 Maximum Copy Length: 128 00:07:50.275 Maximum Source Range Count: 128 00:07:50.275 NGUID/EUI64 Never Reused: No 00:07:50.275 Namespace Write Protected: No 00:07:50.275 Number of LBA Formats: 8 00:07:50.275 Current LBA Format: LBA Format #04 00:07:50.275 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.275 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.275 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.275 LBA Format #03: 
Data Size: 512 Metadata Size: 64 00:07:50.275 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.275 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.275 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.275 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.275 00:07:50.275 NVM Specific Namespace Data 00:07:50.275 =========================== 00:07:50.275 Logical Block Storage Tag Mask: 0 00:07:50.275 Protection Information Capabilities: 00:07:50.275 16b Guard Protection Information Storage Tag Support: No 00:07:50.275 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.275 Storage Tag Check Read Support: No 00:07:50.275 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.275 ===================================================== 00:07:50.275 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.275 ===================================================== 00:07:50.275 Controller Capabilities/Features 00:07:50.275 ================================ 00:07:50.275 Vendor ID: 1b36 00:07:50.275 Subsystem Vendor ID: 1af4 00:07:50.275 Serial Number: 12343 00:07:50.275 Model Number: QEMU NVMe Ctrl 00:07:50.275 Firmware Version: 8.0.0 00:07:50.275 Recommended Arb Burst: 6 00:07:50.275 IEEE OUI Identifier: 00 54 52 00:07:50.275 Multi-path I/O 00:07:50.275 May have multiple subsystem ports: No 00:07:50.275 May have multiple controllers: Yes 00:07:50.275 Associated with SR-IOV VF: No 00:07:50.275 Max Data Transfer Size: 524288 00:07:50.275 Max Number of Namespaces: 256 00:07:50.275 Max Number of I/O Queues: 64 00:07:50.275 NVMe Specification Version (VS): 1.4 00:07:50.275 NVMe Specification Version (Identify): 1.4 00:07:50.275 Maximum Queue Entries: 2048 00:07:50.275 Contiguous Queues Required: Yes 00:07:50.275 Arbitration Mechanisms Supported 00:07:50.275 Weighted Round Robin: Not Supported 00:07:50.275 Vendor Specific: Not Supported 00:07:50.275 Reset Timeout: 7500 ms 00:07:50.275 Doorbell Stride: 4 bytes 00:07:50.275 NVM Subsystem Reset: Not Supported 00:07:50.275 Command Sets Supported 00:07:50.275 NVM Command Set: Supported 00:07:50.275 Boot Partition: Not Supported 00:07:50.275 Memory Page Size Minimum: 4096 bytes 00:07:50.275 Memory Page Size Maximum: 65536 bytes 00:07:50.275 Persistent Memory Region: Not Supported 00:07:50.275 Optional Asynchronous Events Supported 00:07:50.275 Namespace Attribute Notices: Supported 00:07:50.275 Firmware Activation Notices: Not Supported 00:07:50.275 ANA Change Notices: Not Supported 00:07:50.275 PLE Aggregate Log Change Notices: Not Supported 00:07:50.275 LBA Status Info Alert Notices: Not Supported 00:07:50.275 EGE Aggregate Log Change Notices: Not Supported 00:07:50.275 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.275 Zone 
Descriptor Change Notices: Not Supported 00:07:50.275 Discovery Log Change Notices: Not Supported 00:07:50.275 Controller Attributes 00:07:50.275 128-bit Host Identifier: Not Supported 00:07:50.275 Non-Operational Permissive Mode: Not Supported 00:07:50.275 NVM Sets: Not Supported 00:07:50.275 Read Recovery Levels: Not Supported 00:07:50.275 Endurance Groups: Supported 00:07:50.275 Predictable Latency Mode: Not Supported 00:07:50.275 Traffic Based Keep ALive: Not Supported 00:07:50.275 Namespace Granularity: Not Supported 00:07:50.275 SQ Associations: Not Supported 00:07:50.275 UUID List: Not Supported 00:07:50.275 Multi-Domain Subsystem: Not Supported 00:07:50.275 Fixed Capacity Management: Not Supported 00:07:50.275 Variable Capacity Management: Not Supported 00:07:50.275 Delete Endurance Group: Not Supported 00:07:50.275 Delete NVM Set: Not Supported 00:07:50.275 Extended LBA Formats Supported: Supported 00:07:50.275 Flexible Data Placement Supported: Supported 00:07:50.275 00:07:50.275 Controller Memory Buffer Support 00:07:50.275 ================================ 00:07:50.275 Supported: No 00:07:50.275 00:07:50.275 Persistent Memory Region Support 00:07:50.275 ================================ 00:07:50.275 Supported: No 00:07:50.275 00:07:50.275 Admin Command Set Attributes 00:07:50.275 ============================ 00:07:50.275 Security Send/Receive: Not Supported 00:07:50.275 Format NVM: Supported 00:07:50.275 Firmware Activate/Download: Not Supported 00:07:50.275 Namespace Management: Supported 00:07:50.275 Device Self-Test: Not Supported 00:07:50.275 Directives: Supported 00:07:50.275 NVMe-MI: Not Supported 00:07:50.275 Virtualization Management: Not Supported 00:07:50.275 Doorbell Buffer Config: Supported 00:07:50.275 Get LBA Status Capability: Not Supported 00:07:50.275 Command & Feature Lockdown Capability: Not Supported 00:07:50.275 Abort Command Limit: 4 00:07:50.275 Async Event Request Limit: 4 00:07:50.275 Number of Firmware Slots: N/A 00:07:50.275 Firmware Slot 1 Read-Only: N/A 00:07:50.275 Firmware Activation Without Reset: N/A 00:07:50.275 Multiple Update Detection Support: N/A 00:07:50.275 Firmware Update Granularity: No Information Provided 00:07:50.275 Per-Namespace SMART Log: Yes 00:07:50.276 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.276 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:50.276 Command Effects Log Page: Supported 00:07:50.276 Get Log Page Extended Data: Supported 00:07:50.276 Telemetry Log Pages: Not Supported 00:07:50.276 Persistent Event Log Pages: Not Supported 00:07:50.276 Supported Log Pages Log Page: May Support 00:07:50.276 Commands Supported & Effects Log Page: Not Supported 00:07:50.276 Feature Identifiers & Effects Log Page:May Support 00:07:50.276 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.276 Data Area 4 for Telemetry Log: Not Supported 00:07:50.276 Error Log Page Entries Supported: 1 00:07:50.276 Keep Alive: Not Supported 00:07:50.276 00:07:50.276 NVM Command Set Attributes 00:07:50.276 ========================== 00:07:50.276 Submission Queue Entry Size 00:07:50.276 Max: 64 00:07:50.276 Min: 64 00:07:50.276 Completion Queue Entry Size 00:07:50.276 Max: 16 00:07:50.276 Min: 16 00:07:50.276 Number of Namespaces: 256 00:07:50.276 Compare Command: Supported 00:07:50.276 Write Uncorrectable Command: Not Supported 00:07:50.276 Dataset Management Command: Supported 00:07:50.276 Write Zeroes Command: Supported 00:07:50.276 Set Features Save Field: Supported 00:07:50.276 Reservations: Not Supported 00:07:50.276 
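
The bdf list that nvme_identify iterates over comes straight from gen_nvme.sh, which emits an SPDK JSON config; the traced jq filter extracts one PCI address per attached controller:

  # Enumerate NVMe PCI addresses the way get_nvme_bdfs does above
  bdfs=($("$rootdir"/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  ((${#bdfs[@]} > 0)) || exit 1     # the helper fails when no controllers are found
  printf '%s\n' "${bdfs[@]}"        # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
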
Timestamp: Supported 00:07:50.276 Copy: Supported 00:07:50.276 Volatile Write Cache: Present 00:07:50.276 Atomic Write Unit (Normal): 1 00:07:50.276 Atomic Write Unit (PFail): 1 00:07:50.276 Atomic Compare & Write Unit: 1 00:07:50.276 Fused Compare & Write: Not Supported 00:07:50.276 Scatter-Gather List 00:07:50.276 SGL Command Set: Supported 00:07:50.276 SGL Keyed: Not Supported 00:07:50.276 SGL Bit Bucket Descriptor: Not Supported 00:07:50.276 SGL Metadata Pointer: Not Supported 00:07:50.276 Oversized SGL: Not Supported 00:07:50.276 SGL Metadata Address: Not Supported 00:07:50.276 SGL Offset: Not Supported 00:07:50.276 Transport SGL Data Block: Not Supported 00:07:50.276 Replay Protected Memory Block: Not Supported 00:07:50.276 00:07:50.276 Firmware Slot Information 00:07:50.276 ========================= 00:07:50.276 Active slot: 1 00:07:50.276 Slot 1 Firmware Revision: 1.0 00:07:50.276 00:07:50.276 00:07:50.276 Commands Supported and Effects 00:07:50.276 ============================== 00:07:50.276 Admin Commands 00:07:50.276 -------------- 00:07:50.276 Delete I/O Submission Queue (00h): Supported 00:07:50.276 Create I/O Submission Queue (01h): Supported 00:07:50.276 Get Log Page (02h): Supported 00:07:50.276 Delete I/O Completion Queue (04h): Supported 00:07:50.276 Create I/O Completion Queue (05h): Supported 00:07:50.276 Identify (06h): Supported 00:07:50.276 Abort (08h): Supported 00:07:50.276 Set Features (09h): Supported 00:07:50.276 Get Features (0Ah): Supported 00:07:50.276 Asynchronous Event Request (0Ch): Supported 00:07:50.276 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.276 Directive Send (19h): Supported 00:07:50.276 Directive Receive (1Ah): Supported 00:07:50.276 Virtualization Management (1Ch): Supported 00:07:50.276 Doorbell Buffer Config (7Ch): Supported 00:07:50.276 Format NVM (80h): Supported LBA-Change 00:07:50.276 I/O Commands 00:07:50.276 ------------ 00:07:50.276 Flush (00h): Supported LBA-Change 00:07:50.276 Write (01h): Supported LBA-Change 00:07:50.276 Read (02h): Supported 00:07:50.276 Compare (05h): Supported 00:07:50.276 Write Zeroes (08h): Supported LBA-Change 00:07:50.276 Dataset Management (09h): Supported LBA-Change 00:07:50.276 Unknown (0Ch): Supported 00:07:50.276 Unknown (12h): Supported 00:07:50.276 Copy (19h): Supported LBA-Change 00:07:50.276 Unknown (1Dh): Supported LBA-Change 00:07:50.276 00:07:50.276 Error Log 00:07:50.276 ========= 00:07:50.276 00:07:50.276 Arbitration 00:07:50.276 =========== 00:07:50.276 Arbitration Burst: no limit 00:07:50.276 00:07:50.276 Power Management 00:07:50.276 ================ 00:07:50.276 Number of Power States: 1 00:07:50.276 Current Power State: Power State #0 00:07:50.276 Power State #0: 00:07:50.276 Max Power: 25.00 W 00:07:50.276 Non-Operational State: Operational 00:07:50.276 Entry Latency: 16 microseconds 00:07:50.276 Exit Latency: 4 microseconds 00:07:50.276 Relative Read Throughput: 0 00:07:50.276 Relative Read Latency: 0 00:07:50.276 Relative Write Throughput: 0 00:07:50.276 Relative Write Latency: 0 00:07:50.276 Idle Power: Not Reported 00:07:50.276 Active Power: Not Reported 00:07:50.276 Non-Operational Permissive Mode: Not Supported 00:07:50.276 00:07:50.276 Health Information 00:07:50.276 ================== 00:07:50.276 Critical Warnings: 00:07:50.276 Available Spare Space: OK 00:07:50.276 Temperature: OK 00:07:50.276 Device Reliability: OK 00:07:50.276 Read Only: No 00:07:50.276 Volatile Memory Backup: OK 00:07:50.276 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.276 
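
The health blocks in these dumps report temperatures in Kelvin with a Celsius conversion in parentheses; the tool applies a flat 273 offset, which is how 323 Kelvin prints as 50 Celsius and the 343 Kelvin threshold as 70 Celsius:

  k=323; echo "$k Kelvin ($((k - 273)) Celsius)"   # -> "323 Kelvin (50 Celsius)"
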
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.276 Available Spare: 0% 00:07:50.276 Available Spare Threshold: 0% 00:07:50.276 Life Percentage Used: 0% 00:07:50.276 Data Units Read: 713 00:07:50.276 Data Units Written: 642 00:07:50.276 Host Read Commands: 32826 00:07:50.276 Host Write Commands: 32252 00:07:50.276 Controller Busy Time: 0 minutes 00:07:50.276 Power Cycles: 0 00:07:50.276 Power On Hours: 0 hours 00:07:50.276 Unsafe Shutdowns: 0 00:07:50.276 Unrecoverable Media Errors: 0 00:07:50.276 Lifetime Error Log Entries: 0 00:07:50.276 Warning Temperature Time: 0 minutes 00:07:50.276 Critical Temperature Time: 0 minutes 00:07:50.276 00:07:50.276 Number of Queues 00:07:50.276 ================ 00:07:50.276 Number of I/O Submission Queues: 64 00:07:50.276 Number of I/O Completion Queues: 64 00:07:50.276 00:07:50.276 ZNS Specific Controller Data 00:07:50.276 ============================ 00:07:50.276 Zone Append Size Limit: 0 00:07:50.276 00:07:50.276 00:07:50.276 Active Namespaces 00:07:50.276 ================= 00:07:50.276 Namespace ID:1 00:07:50.276 Error Recovery Timeout: Unlimited 00:07:50.276 Command Set Identifier: NVM (00h) 00:07:50.276 Deallocate: Supported 00:07:50.276 Deallocated/Unwritten Error: Supported 00:07:50.276 Deallocated Read Value: All 0x00 00:07:50.276 Deallocate in Write Zeroes: Not Supported 00:07:50.276 Deallocated Guard Field: 0xFFFF 00:07:50.276 Flush: Supported 00:07:50.276 Reservation: Not Supported 00:07:50.276 Namespace Sharing Capabilities: Multiple Controllers 00:07:50.276 Size (in LBAs): 262144 (1GiB) 00:07:50.276 Capacity (in LBAs): 262144 (1GiB) 00:07:50.276 Utilization (in LBAs): 262144 (1GiB) 00:07:50.276 Thin Provisioning: Not Supported 00:07:50.276 Per-NS Atomic Units: No 00:07:50.276 Maximum Single Source Range Length: 128 00:07:50.276 Maximum Copy Length: 128 00:07:50.276 Maximum Source Range Count: 128 00:07:50.276 NGUID/EUI64 Never Reused: No 00:07:50.276 Namespace Write Protected: No 00:07:50.276 Endurance group ID: 1 00:07:50.276 Number of LBA Formats: 8 00:07:50.276 Current LBA Format: LBA Format #04 00:07:50.276 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.276 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.276 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.276 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.276 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.276 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.276 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.276 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.276 00:07:50.276 Get Feature FDP: 00:07:50.276 ================ 00:07:50.276 Enabled: Yes 00:07:50.276 FDP configuration index: 0 00:07:50.276 00:07:50.276 FDP configurations log page 00:07:50.276 =========================== 00:07:50.276 Number of FDP configurations: 1 00:07:50.276 Version: 0 00:07:50.276 Size: 112 00:07:50.276 FDP Configuration Descriptor: 0 00:07:50.276 Descriptor Size: 96 00:07:50.276 Reclaim Group Identifier format: 2 00:07:50.276 FDP Volatile Write Cache: Not Present 00:07:50.276 FDP Configuration: Valid 00:07:50.276 Vendor Specific Size: 0 00:07:50.276 Number of Reclaim Groups: 2 00:07:50.276 Number of Reclaim Unit Handles: 8 00:07:50.276 Max Placement Identifiers: 128 00:07:50.276 Number of Namespaces Supported: 256 00:07:50.276 Reclaim unit Nominal Size: 6000000 bytes 00:07:50.276 Estimated Reclaim Unit Time Limit: Not Reported 00:07:50.276 RUH Desc #000: RUH Type: Initially Isolated 00:07:50.277 RUH Desc #001: RUH
Type: Initially Isolated 00:07:50.277 RUH Desc #002: RUH Type: Initially Isolated 00:07:50.277 RUH Desc #003: RUH Type: Initially Isolated 00:07:50.277 RUH Desc #004: RUH Type: Initially Isolated 00:07:50.277 RUH Desc #005: RUH Type: Initially Isolated 00:07:50.277 RUH Desc #006: RUH Type: Initially Isolated 00:07:50.277 RUH Desc #007: RUH Type: Initially Isolated 00:07:50.277 00:07:50.277 FDP reclaim unit handle usage log page 00:07:50.277 ====================================== 00:07:50.277 Number of Reclaim Unit Handles: 8 00:07:50.277 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:50.277 RUH Usage Desc #001: RUH Attributes: Unused 00:07:50.277 RUH Usage Desc #002: RUH Attributes: Unused 00:07:50.277 RUH Usage Desc #003: RUH Attributes: Unused 00:07:50.277 RUH Usage Desc #004: RUH Attributes: Unused 00:07:50.277 RUH Usage Desc #005: RUH Attributes: Unused 00:07:50.277 RUH Usage Desc #006: RUH Attributes: Unused 00:07:50.277 RUH Usage Desc #007: RUH Attributes: Unused 00:07:50.277 00:07:50.277 FDP statistics log page 00:07:50.277 ======================= 00:07:50.277 Host bytes with metadata written: 385724416 00:07:50.277 Media[2024-11-20 09:01:29.060853] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62972 terminated unexpected 00:07:50.277 bytes with metadata written: 385765376 00:07:50.277 Media bytes erased: 0 00:07:50.277 00:07:50.277 FDP events log page 00:07:50.277 =================== 00:07:50.277 Number of FDP events: 0 00:07:50.277 00:07:50.277 NVM Specific Namespace Data 00:07:50.277 =========================== 00:07:50.277 Logical Block Storage Tag Mask: 0 00:07:50.277 Protection Information Capabilities: 00:07:50.277 16b Guard Protection Information Storage Tag Support: No 00:07:50.277 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.277 Storage Tag Check Read Support: No 00:07:50.277 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.277 ===================================================== 00:07:50.277 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.277 ===================================================== 00:07:50.277 Controller Capabilities/Features 00:07:50.277 ================================ 00:07:50.277 Vendor ID: 1b36 00:07:50.277 Subsystem Vendor ID: 1af4 00:07:50.277 Serial Number: 12342 00:07:50.277 Model Number: QEMU NVMe Ctrl 00:07:50.277 Firmware Version: 8.0.0 00:07:50.277 Recommended Arb Burst: 6 00:07:50.277 IEEE OUI Identifier: 00 54 52 00:07:50.277 Multi-path I/O 00:07:50.277 May have multiple subsystem ports: No 00:07:50.277 May have multiple controllers: No 00:07:50.277 Associated with SR-IOV VF: No 00:07:50.277 Max Data Transfer Size: 524288 00:07:50.277 Max Number of Namespaces: 256 00:07:50.277 
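
The FDP statistics log above is the part unique to the 12343 (fdp-subsys3) dump: host bytes and media bytes written diverge by whatever extra media traffic data placement cost. For this run the overhead works out to 40 KiB:

  # Media-side overhead implied by the FDP statistics above
  echo $((385765376 - 385724416))   # 40960 bytes (40 KiB) beyond what the host wrote
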
Max Number of I/O Queues: 64 00:07:50.277 NVMe Specification Version (VS): 1.4 00:07:50.277 NVMe Specification Version (Identify): 1.4 00:07:50.277 Maximum Queue Entries: 2048 00:07:50.277 Contiguous Queues Required: Yes 00:07:50.277 Arbitration Mechanisms Supported 00:07:50.277 Weighted Round Robin: Not Supported 00:07:50.277 Vendor Specific: Not Supported 00:07:50.277 Reset Timeout: 7500 ms 00:07:50.277 Doorbell Stride: 4 bytes 00:07:50.277 NVM Subsystem Reset: Not Supported 00:07:50.277 Command Sets Supported 00:07:50.277 NVM Command Set: Supported 00:07:50.277 Boot Partition: Not Supported 00:07:50.277 Memory Page Size Minimum: 4096 bytes 00:07:50.277 Memory Page Size Maximum: 65536 bytes 00:07:50.277 Persistent Memory Region: Not Supported 00:07:50.277 Optional Asynchronous Events Supported 00:07:50.277 Namespace Attribute Notices: Supported 00:07:50.277 Firmware Activation Notices: Not Supported 00:07:50.277 ANA Change Notices: Not Supported 00:07:50.277 PLE Aggregate Log Change Notices: Not Supported 00:07:50.277 LBA Status Info Alert Notices: Not Supported 00:07:50.277 EGE Aggregate Log Change Notices: Not Supported 00:07:50.277 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.277 Zone Descriptor Change Notices: Not Supported 00:07:50.277 Discovery Log Change Notices: Not Supported 00:07:50.277 Controller Attributes 00:07:50.277 128-bit Host Identifier: Not Supported 00:07:50.277 Non-Operational Permissive Mode: Not Supported 00:07:50.277 NVM Sets: Not Supported 00:07:50.277 Read Recovery Levels: Not Supported 00:07:50.277 Endurance Groups: Not Supported 00:07:50.277 Predictable Latency Mode: Not Supported 00:07:50.277 Traffic Based Keep ALive: Not Supported 00:07:50.277 Namespace Granularity: Not Supported 00:07:50.277 SQ Associations: Not Supported 00:07:50.277 UUID List: Not Supported 00:07:50.277 Multi-Domain Subsystem: Not Supported 00:07:50.277 Fixed Capacity Management: Not Supported 00:07:50.277 Variable Capacity Management: Not Supported 00:07:50.277 Delete Endurance Group: Not Supported 00:07:50.277 Delete NVM Set: Not Supported 00:07:50.277 Extended LBA Formats Supported: Supported 00:07:50.277 Flexible Data Placement Supported: Not Supported 00:07:50.277 00:07:50.277 Controller Memory Buffer Support 00:07:50.277 ================================ 00:07:50.277 Supported: No 00:07:50.277 00:07:50.277 Persistent Memory Region Support 00:07:50.277 ================================ 00:07:50.277 Supported: No 00:07:50.277 00:07:50.277 Admin Command Set Attributes 00:07:50.277 ============================ 00:07:50.277 Security Send/Receive: Not Supported 00:07:50.277 Format NVM: Supported 00:07:50.277 Firmware Activate/Download: Not Supported 00:07:50.277 Namespace Management: Supported 00:07:50.277 Device Self-Test: Not Supported 00:07:50.277 Directives: Supported 00:07:50.277 NVMe-MI: Not Supported 00:07:50.277 Virtualization Management: Not Supported 00:07:50.277 Doorbell Buffer Config: Supported 00:07:50.277 Get LBA Status Capability: Not Supported 00:07:50.277 Command & Feature Lockdown Capability: Not Supported 00:07:50.277 Abort Command Limit: 4 00:07:50.277 Async Event Request Limit: 4 00:07:50.277 Number of Firmware Slots: N/A 00:07:50.277 Firmware Slot 1 Read-Only: N/A 00:07:50.277 Firmware Activation Without Reset: N/A 00:07:50.277 Multiple Update Detection Support: N/A 00:07:50.277 Firmware Update Granularity: No Information Provided 00:07:50.277 Per-Namespace SMART Log: Yes 00:07:50.277 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.277 
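
All four controller dumps in this test come from the single spdk_nvme_identify -i 0 invocation at the start of nvme_identify. When rereading output this long it can help to limit the tool to one controller via a transport ID; a sketch, assuming the -r syntax of SPDK's identify example app (verify against your build):

  # Dump only the 12342 controller at 0000:00:12.0 instead of all four
  build/bin/spdk_nvme_identify -i 0 -r 'trtype:PCIe traddr:0000:00:12.0'
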
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:50.277 Command Effects Log Page: Supported 00:07:50.277 Get Log Page Extended Data: Supported 00:07:50.277 Telemetry Log Pages: Not Supported 00:07:50.277 Persistent Event Log Pages: Not Supported 00:07:50.277 Supported Log Pages Log Page: May Support 00:07:50.277 Commands Supported & Effects Log Page: Not Supported 00:07:50.277 Feature Identifiers & Effects Log Page:May Support 00:07:50.277 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.277 Data Area 4 for Telemetry Log: Not Supported 00:07:50.277 Error Log Page Entries Supported: 1 00:07:50.277 Keep Alive: Not Supported 00:07:50.277 00:07:50.277 NVM Command Set Attributes 00:07:50.277 ========================== 00:07:50.277 Submission Queue Entry Size 00:07:50.277 Max: 64 00:07:50.278 Min: 64 00:07:50.278 Completion Queue Entry Size 00:07:50.278 Max: 16 00:07:50.278 Min: 16 00:07:50.278 Number of Namespaces: 256 00:07:50.278 Compare Command: Supported 00:07:50.278 Write Uncorrectable Command: Not Supported 00:07:50.278 Dataset Management Command: Supported 00:07:50.278 Write Zeroes Command: Supported 00:07:50.278 Set Features Save Field: Supported 00:07:50.278 Reservations: Not Supported 00:07:50.278 Timestamp: Supported 00:07:50.278 Copy: Supported 00:07:50.278 Volatile Write Cache: Present 00:07:50.278 Atomic Write Unit (Normal): 1 00:07:50.278 Atomic Write Unit (PFail): 1 00:07:50.278 Atomic Compare & Write Unit: 1 00:07:50.278 Fused Compare & Write: Not Supported 00:07:50.278 Scatter-Gather List 00:07:50.278 SGL Command Set: Supported 00:07:50.278 SGL Keyed: Not Supported 00:07:50.278 SGL Bit Bucket Descriptor: Not Supported 00:07:50.278 SGL Metadata Pointer: Not Supported 00:07:50.278 Oversized SGL: Not Supported 00:07:50.278 SGL Metadata Address: Not Supported 00:07:50.278 SGL Offset: Not Supported 00:07:50.278 Transport SGL Data Block: Not Supported 00:07:50.278 Replay Protected Memory Block: Not Supported 00:07:50.278 00:07:50.278 Firmware Slot Information 00:07:50.278 ========================= 00:07:50.278 Active slot: 1 00:07:50.278 Slot 1 Firmware Revision: 1.0 00:07:50.278 00:07:50.278 00:07:50.278 Commands Supported and Effects 00:07:50.278 ============================== 00:07:50.278 Admin Commands 00:07:50.278 -------------- 00:07:50.278 Delete I/O Submission Queue (00h): Supported 00:07:50.278 Create I/O Submission Queue (01h): Supported 00:07:50.278 Get Log Page (02h): Supported 00:07:50.278 Delete I/O Completion Queue (04h): Supported 00:07:50.278 Create I/O Completion Queue (05h): Supported 00:07:50.278 Identify (06h): Supported 00:07:50.278 Abort (08h): Supported 00:07:50.278 Set Features (09h): Supported 00:07:50.278 Get Features (0Ah): Supported 00:07:50.278 Asynchronous Event Request (0Ch): Supported 00:07:50.278 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.278 Directive Send (19h): Supported 00:07:50.278 Directive Receive (1Ah): Supported 00:07:50.278 Virtualization Management (1Ch): Supported 00:07:50.278 Doorbell Buffer Config (7Ch): Supported 00:07:50.278 Format NVM (80h): Supported LBA-Change 00:07:50.278 I/O Commands 00:07:50.278 ------------ 00:07:50.278 Flush (00h): Supported LBA-Change 00:07:50.278 Write (01h): Supported LBA-Change 00:07:50.278 Read (02h): Supported 00:07:50.278 Compare (05h): Supported 00:07:50.278 Write Zeroes (08h): Supported LBA-Change 00:07:50.278 Dataset Management (09h): Supported LBA-Change 00:07:50.278 Unknown (0Ch): Supported 00:07:50.278 Unknown (12h): Supported 00:07:50.278 Copy (19h): Supported 
LBA-Change 00:07:50.278 Unknown (1Dh): Supported LBA-Change 00:07:50.278 00:07:50.278 Error Log 00:07:50.278 ========= 00:07:50.278 00:07:50.278 Arbitration 00:07:50.278 =========== 00:07:50.278 Arbitration Burst: no limit 00:07:50.278 00:07:50.278 Power Management 00:07:50.278 ================ 00:07:50.278 Number of Power States: 1 00:07:50.278 Current Power State: Power State #0 00:07:50.278 Power State #0: 00:07:50.278 Max Power: 25.00 W 00:07:50.278 Non-Operational State: Operational 00:07:50.278 Entry Latency: 16 microseconds 00:07:50.278 Exit Latency: 4 microseconds 00:07:50.278 Relative Read Throughput: 0 00:07:50.278 Relative Read Latency: 0 00:07:50.278 Relative Write Throughput: 0 00:07:50.278 Relative Write Latency: 0 00:07:50.278 Idle Power: Not Reported 00:07:50.278 Active Power: Not Reported 00:07:50.278 Non-Operational Permissive Mode: Not Supported 00:07:50.278 00:07:50.278 Health Information 00:07:50.278 ================== 00:07:50.278 Critical Warnings: 00:07:50.278 Available Spare Space: OK 00:07:50.278 Temperature: OK 00:07:50.278 Device Reliability: OK 00:07:50.278 Read Only: No 00:07:50.278 Volatile Memory Backup: OK 00:07:50.278 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.278 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.278 Available Spare: 0% 00:07:50.278 Available Spare Threshold: 0% 00:07:50.278 Life Percentage Used: 0% 00:07:50.278 Data Units Read: 1957 00:07:50.278 Data Units Written: 1745 00:07:50.278 Host Read Commands: 96833 00:07:50.278 Host Write Commands: 95102 00:07:50.278 Controller Busy Time: 0 minutes 00:07:50.278 Power Cycles: 0 00:07:50.278 Power On Hours: 0 hours 00:07:50.278 Unsafe Shutdowns: 0 00:07:50.278 Unrecoverable Media Errors: 0 00:07:50.278 Lifetime Error Log Entries: 0 00:07:50.278 Warning Temperature Time: 0 minutes 00:07:50.278 Critical Temperature Time: 0 minutes 00:07:50.278 00:07:50.278 Number of Queues 00:07:50.278 ================ 00:07:50.278 Number of I/O Submission Queues: 64 00:07:50.278 Number of I/O Completion Queues: 64 00:07:50.278 00:07:50.278 ZNS Specific Controller Data 00:07:50.278 ============================ 00:07:50.278 Zone Append Size Limit: 0 00:07:50.278 00:07:50.278 00:07:50.278 Active Namespaces 00:07:50.278 ================= 00:07:50.278 Namespace ID:1 00:07:50.278 Error Recovery Timeout: Unlimited 00:07:50.278 Command Set Identifier: NVM (00h) 00:07:50.278 Deallocate: Supported 00:07:50.278 Deallocated/Unwritten Error: Supported 00:07:50.278 Deallocated Read Value: All 0x00 00:07:50.278 Deallocate in Write Zeroes: Not Supported 00:07:50.278 Deallocated Guard Field: 0xFFFF 00:07:50.278 Flush: Supported 00:07:50.278 Reservation: Not Supported 00:07:50.278 Namespace Sharing Capabilities: Private 00:07:50.278 Size (in LBAs): 1048576 (4GiB) 00:07:50.278 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.278 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.278 Thin Provisioning: Not Supported 00:07:50.278 Per-NS Atomic Units: No 00:07:50.278 Maximum Single Source Range Length: 128 00:07:50.278 Maximum Copy Length: 128 00:07:50.278 Maximum Source Range Count: 128 00:07:50.278 NGUID/EUI64 Never Reused: No 00:07:50.278 Namespace Write Protected: No 00:07:50.278 Number of LBA Formats: 8 00:07:50.278 Current LBA Format: LBA Format #04 00:07:50.278 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.278 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.278 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.278 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.278 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:50.278 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.278 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.278 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.278 00:07:50.278 NVM Specific Namespace Data 00:07:50.278 =========================== 00:07:50.278 Logical Block Storage Tag Mask: 0 00:07:50.278 Protection Information Capabilities: 00:07:50.278 16b Guard Protection Information Storage Tag Support: No 00:07:50.278 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.278 Storage Tag Check Read Support: No 00:07:50.278 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.278 Namespace ID:2 00:07:50.278 Error Recovery Timeout: Unlimited 00:07:50.278 Command Set Identifier: NVM (00h) 00:07:50.278 Deallocate: Supported 00:07:50.278 Deallocated/Unwritten Error: Supported 00:07:50.278 Deallocated Read Value: All 0x00 00:07:50.278 Deallocate in Write Zeroes: Not Supported 00:07:50.278 Deallocated Guard Field: 0xFFFF 00:07:50.278 Flush: Supported 00:07:50.278 Reservation: Not Supported 00:07:50.278 Namespace Sharing Capabilities: Private 00:07:50.278 Size (in LBAs): 1048576 (4GiB) 00:07:50.278 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.278 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.278 Thin Provisioning: Not Supported 00:07:50.278 Per-NS Atomic Units: No 00:07:50.278 Maximum Single Source Range Length: 128 00:07:50.278 Maximum Copy Length: 128 00:07:50.278 Maximum Source Range Count: 128 00:07:50.278 NGUID/EUI64 Never Reused: No 00:07:50.278 Namespace Write Protected: No 00:07:50.278 Number of LBA Formats: 8 00:07:50.278 Current LBA Format: LBA Format #04 00:07:50.278 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.278 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.279 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.279 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.279 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.279 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.279 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.279 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.279 00:07:50.279 NVM Specific Namespace Data 00:07:50.279 =========================== 00:07:50.279 Logical Block Storage Tag Mask: 0 00:07:50.279 Protection Information Capabilities: 00:07:50.279 16b Guard Protection Information Storage Tag Support: No 00:07:50.279 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.279 Storage Tag Check Read Support: No 00:07:50.279 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:50.279 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Namespace ID:3 00:07:50.279 Error Recovery Timeout: Unlimited 00:07:50.279 Command Set Identifier: NVM (00h) 00:07:50.279 Deallocate: Supported 00:07:50.279 Deallocated/Unwritten Error: Supported 00:07:50.279 Deallocated Read Value: All 0x00 00:07:50.279 Deallocate in Write Zeroes: Not Supported 00:07:50.279 Deallocated Guard Field: 0xFFFF 00:07:50.279 Flush: Supported 00:07:50.279 Reservation: Not Supported 00:07:50.279 Namespace Sharing Capabilities: Private 00:07:50.279 Size (in LBAs): 1048576 (4GiB) 00:07:50.279 Capacity (in LBAs): 1048576 (4GiB) 00:07:50.279 Utilization (in LBAs): 1048576 (4GiB) 00:07:50.279 Thin Provisioning: Not Supported 00:07:50.279 Per-NS Atomic Units: No 00:07:50.279 Maximum Single Source Range Length: 128 00:07:50.279 Maximum Copy Length: 128 00:07:50.279 Maximum Source Range Count: 128 00:07:50.279 NGUID/EUI64 Never Reused: No 00:07:50.279 Namespace Write Protected: No 00:07:50.279 Number of LBA Formats: 8 00:07:50.279 Current LBA Format: LBA Format #04 00:07:50.279 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.279 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.279 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.279 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.279 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.279 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.279 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.279 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.279 00:07:50.279 NVM Specific Namespace Data 00:07:50.279 =========================== 00:07:50.279 Logical Block Storage Tag Mask: 0 00:07:50.279 Protection Information Capabilities: 00:07:50.279 16b Guard Protection Information Storage Tag Support: No 00:07:50.279 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.279 Storage Tag Check Read Support: No 00:07:50.279 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.279 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:50.279 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:50.553 ===================================================== 00:07:50.553 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.553 ===================================================== 00:07:50.553 Controller Capabilities/Features 00:07:50.553 ================================ 00:07:50.553 Vendor ID: 1b36 00:07:50.553 Subsystem Vendor ID: 1af4 00:07:50.553 Serial Number: 12340 00:07:50.553 Model Number: QEMU NVMe Ctrl 00:07:50.553 Firmware Version: 8.0.0 00:07:50.553 Recommended Arb Burst: 6 00:07:50.553 IEEE OUI Identifier: 00 54 52 00:07:50.553 Multi-path I/O 00:07:50.553 May have multiple subsystem ports: No 00:07:50.553 May have multiple controllers: No 00:07:50.553 Associated with SR-IOV VF: No 00:07:50.553 Max Data Transfer Size: 524288 00:07:50.553 Max Number of Namespaces: 256 00:07:50.553 Max Number of I/O Queues: 64 00:07:50.553 NVMe Specification Version (VS): 1.4 00:07:50.553 NVMe Specification Version (Identify): 1.4 00:07:50.553 Maximum Queue Entries: 2048 00:07:50.553 Contiguous Queues Required: Yes 00:07:50.553 Arbitration Mechanisms Supported 00:07:50.553 Weighted Round Robin: Not Supported 00:07:50.553 Vendor Specific: Not Supported 00:07:50.553 Reset Timeout: 7500 ms 00:07:50.553 Doorbell Stride: 4 bytes 00:07:50.553 NVM Subsystem Reset: Not Supported 00:07:50.553 Command Sets Supported 00:07:50.553 NVM Command Set: Supported 00:07:50.553 Boot Partition: Not Supported 00:07:50.553 Memory Page Size Minimum: 4096 bytes 00:07:50.553 Memory Page Size Maximum: 65536 bytes 00:07:50.553 Persistent Memory Region: Not Supported 00:07:50.553 Optional Asynchronous Events Supported 00:07:50.553 Namespace Attribute Notices: Supported 00:07:50.553 Firmware Activation Notices: Not Supported 00:07:50.553 ANA Change Notices: Not Supported 00:07:50.553 PLE Aggregate Log Change Notices: Not Supported 00:07:50.553 LBA Status Info Alert Notices: Not Supported 00:07:50.553 EGE Aggregate Log Change Notices: Not Supported 00:07:50.553 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.553 Zone Descriptor Change Notices: Not Supported 00:07:50.553 Discovery Log Change Notices: Not Supported 00:07:50.553 Controller Attributes 00:07:50.553 128-bit Host Identifier: Not Supported 00:07:50.553 Non-Operational Permissive Mode: Not Supported 00:07:50.553 NVM Sets: Not Supported 00:07:50.553 Read Recovery Levels: Not Supported 00:07:50.553 Endurance Groups: Not Supported 00:07:50.553 Predictable Latency Mode: Not Supported 00:07:50.553 Traffic Based Keep ALive: Not Supported 00:07:50.553 Namespace Granularity: Not Supported 00:07:50.553 SQ Associations: Not Supported 00:07:50.553 UUID List: Not Supported 00:07:50.553 Multi-Domain Subsystem: Not Supported 00:07:50.553 Fixed Capacity Management: Not Supported 00:07:50.553 Variable Capacity Management: Not Supported 00:07:50.553 Delete Endurance Group: Not Supported 00:07:50.553 Delete NVM Set: Not Supported 00:07:50.553 Extended LBA Formats Supported: Supported 00:07:50.553 Flexible Data Placement Supported: Not Supported 00:07:50.553 00:07:50.553 Controller Memory Buffer Support 00:07:50.553 ================================ 00:07:50.553 Supported: No 00:07:50.553 00:07:50.553 Persistent Memory Region Support 00:07:50.553 ================================ 00:07:50.553 Supported: No 00:07:50.553 00:07:50.553 Admin Command Set Attributes 00:07:50.553 ============================ 00:07:50.553 Security Send/Receive: Not Supported 00:07:50.553 
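The xtrace lines in this dump show nvme.sh looping 'for bdf in "${bdfs[@]}"' and invoking spdk_nvme_identify once per controller. A standalone sketch of the same loop, assuming the controllers were already handed to a userspace driver (vfio-pci or uio) by SPDK's scripts/setup.sh; matching on the sysfs class code is a simplification of how the test scripts actually collect BDFs:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
    for dev in /sys/bus/pci/devices/*; do
        # 0x010802 is the PCI class code of an NVM Express controller
        [[ $(cat "$dev/class") == 0x010802 ]] || continue
        bdf=${dev##*/}
        "$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
    done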
Format NVM: Supported 00:07:50.553 Firmware Activate/Download: Not Supported 00:07:50.553 Namespace Management: Supported 00:07:50.553 Device Self-Test: Not Supported 00:07:50.553 Directives: Supported 00:07:50.553 NVMe-MI: Not Supported 00:07:50.553 Virtualization Management: Not Supported 00:07:50.553 Doorbell Buffer Config: Supported 00:07:50.553 Get LBA Status Capability: Not Supported 00:07:50.553 Command & Feature Lockdown Capability: Not Supported 00:07:50.553 Abort Command Limit: 4 00:07:50.553 Async Event Request Limit: 4 00:07:50.553 Number of Firmware Slots: N/A 00:07:50.553 Firmware Slot 1 Read-Only: N/A 00:07:50.553 Firmware Activation Without Reset: N/A 00:07:50.553 Multiple Update Detection Support: N/A 00:07:50.553 Firmware Update Granularity: No Information Provided 00:07:50.553 Per-Namespace SMART Log: Yes 00:07:50.553 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.553 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:50.553 Command Effects Log Page: Supported 00:07:50.553 Get Log Page Extended Data: Supported 00:07:50.553 Telemetry Log Pages: Not Supported 00:07:50.553 Persistent Event Log Pages: Not Supported 00:07:50.553 Supported Log Pages Log Page: May Support 00:07:50.553 Commands Supported & Effects Log Page: Not Supported 00:07:50.553 Feature Identifiers & Effects Log Page:May Support 00:07:50.553 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.553 Data Area 4 for Telemetry Log: Not Supported 00:07:50.553 Error Log Page Entries Supported: 1 00:07:50.553 Keep Alive: Not Supported 00:07:50.553 00:07:50.553 NVM Command Set Attributes 00:07:50.553 ========================== 00:07:50.553 Submission Queue Entry Size 00:07:50.553 Max: 64 00:07:50.553 Min: 64 00:07:50.553 Completion Queue Entry Size 00:07:50.553 Max: 16 00:07:50.553 Min: 16 00:07:50.553 Number of Namespaces: 256 00:07:50.553 Compare Command: Supported 00:07:50.553 Write Uncorrectable Command: Not Supported 00:07:50.553 Dataset Management Command: Supported 00:07:50.553 Write Zeroes Command: Supported 00:07:50.553 Set Features Save Field: Supported 00:07:50.553 Reservations: Not Supported 00:07:50.553 Timestamp: Supported 00:07:50.553 Copy: Supported 00:07:50.553 Volatile Write Cache: Present 00:07:50.553 Atomic Write Unit (Normal): 1 00:07:50.553 Atomic Write Unit (PFail): 1 00:07:50.553 Atomic Compare & Write Unit: 1 00:07:50.553 Fused Compare & Write: Not Supported 00:07:50.554 Scatter-Gather List 00:07:50.554 SGL Command Set: Supported 00:07:50.554 SGL Keyed: Not Supported 00:07:50.554 SGL Bit Bucket Descriptor: Not Supported 00:07:50.554 SGL Metadata Pointer: Not Supported 00:07:50.554 Oversized SGL: Not Supported 00:07:50.554 SGL Metadata Address: Not Supported 00:07:50.554 SGL Offset: Not Supported 00:07:50.554 Transport SGL Data Block: Not Supported 00:07:50.554 Replay Protected Memory Block: Not Supported 00:07:50.554 00:07:50.554 Firmware Slot Information 00:07:50.554 ========================= 00:07:50.554 Active slot: 1 00:07:50.554 Slot 1 Firmware Revision: 1.0 00:07:50.554 00:07:50.554 00:07:50.554 Commands Supported and Effects 00:07:50.554 ============================== 00:07:50.554 Admin Commands 00:07:50.554 -------------- 00:07:50.554 Delete I/O Submission Queue (00h): Supported 00:07:50.554 Create I/O Submission Queue (01h): Supported 00:07:50.554 Get Log Page (02h): Supported 00:07:50.554 Delete I/O Completion Queue (04h): Supported 00:07:50.554 Create I/O Completion Queue (05h): Supported 00:07:50.554 Identify (06h): Supported 00:07:50.554 Abort (08h): Supported 
00:07:50.554 Set Features (09h): Supported 00:07:50.554 Get Features (0Ah): Supported 00:07:50.554 Asynchronous Event Request (0Ch): Supported 00:07:50.554 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.554 Directive Send (19h): Supported 00:07:50.554 Directive Receive (1Ah): Supported 00:07:50.554 Virtualization Management (1Ch): Supported 00:07:50.554 Doorbell Buffer Config (7Ch): Supported 00:07:50.554 Format NVM (80h): Supported LBA-Change 00:07:50.554 I/O Commands 00:07:50.554 ------------ 00:07:50.554 Flush (00h): Supported LBA-Change 00:07:50.554 Write (01h): Supported LBA-Change 00:07:50.554 Read (02h): Supported 00:07:50.554 Compare (05h): Supported 00:07:50.554 Write Zeroes (08h): Supported LBA-Change 00:07:50.554 Dataset Management (09h): Supported LBA-Change 00:07:50.554 Unknown (0Ch): Supported 00:07:50.554 Unknown (12h): Supported 00:07:50.554 Copy (19h): Supported LBA-Change 00:07:50.554 Unknown (1Dh): Supported LBA-Change 00:07:50.554 00:07:50.554 Error Log 00:07:50.554 ========= 00:07:50.554 00:07:50.554 Arbitration 00:07:50.554 =========== 00:07:50.554 Arbitration Burst: no limit 00:07:50.554 00:07:50.554 Power Management 00:07:50.554 ================ 00:07:50.554 Number of Power States: 1 00:07:50.554 Current Power State: Power State #0 00:07:50.554 Power State #0: 00:07:50.554 Max Power: 25.00 W 00:07:50.554 Non-Operational State: Operational 00:07:50.554 Entry Latency: 16 microseconds 00:07:50.554 Exit Latency: 4 microseconds 00:07:50.554 Relative Read Throughput: 0 00:07:50.554 Relative Read Latency: 0 00:07:50.554 Relative Write Throughput: 0 00:07:50.554 Relative Write Latency: 0 00:07:50.554 Idle Power: Not Reported 00:07:50.554 Active Power: Not Reported 00:07:50.554 Non-Operational Permissive Mode: Not Supported 00:07:50.554 00:07:50.554 Health Information 00:07:50.554 ================== 00:07:50.554 Critical Warnings: 00:07:50.554 Available Spare Space: OK 00:07:50.554 Temperature: OK 00:07:50.554 Device Reliability: OK 00:07:50.554 Read Only: No 00:07:50.554 Volatile Memory Backup: OK 00:07:50.554 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.554 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.554 Available Spare: 0% 00:07:50.554 Available Spare Threshold: 0% 00:07:50.554 Life Percentage Used: 0% 00:07:50.554 Data Units Read: 603 00:07:50.554 Data Units Written: 531 00:07:50.554 Host Read Commands: 31486 00:07:50.554 Host Write Commands: 31272 00:07:50.554 Controller Busy Time: 0 minutes 00:07:50.554 Power Cycles: 0 00:07:50.554 Power On Hours: 0 hours 00:07:50.554 Unsafe Shutdowns: 0 00:07:50.554 Unrecoverable Media Errors: 0 00:07:50.554 Lifetime Error Log Entries: 0 00:07:50.554 Warning Temperature Time: 0 minutes 00:07:50.554 Critical Temperature Time: 0 minutes 00:07:50.554 00:07:50.554 Number of Queues 00:07:50.554 ================ 00:07:50.554 Number of I/O Submission Queues: 64 00:07:50.554 Number of I/O Completion Queues: 64 00:07:50.554 00:07:50.554 ZNS Specific Controller Data 00:07:50.554 ============================ 00:07:50.554 Zone Append Size Limit: 0 00:07:50.554 00:07:50.554 00:07:50.554 Active Namespaces 00:07:50.554 ================= 00:07:50.554 Namespace ID:1 00:07:50.554 Error Recovery Timeout: Unlimited 00:07:50.554 Command Set Identifier: NVM (00h) 00:07:50.554 Deallocate: Supported 00:07:50.554 Deallocated/Unwritten Error: Supported 00:07:50.554 Deallocated Read Value: All 0x00 00:07:50.554 Deallocate in Write Zeroes: Not Supported 00:07:50.554 Deallocated Guard Field: 0xFFFF 00:07:50.554 Flush: 
Supported 00:07:50.554 Reservation: Not Supported 00:07:50.554 Metadata Transferred as: Separate Metadata Buffer 00:07:50.554 Namespace Sharing Capabilities: Private 00:07:50.554 Size (in LBAs): 1548666 (5GiB) 00:07:50.554 Capacity (in LBAs): 1548666 (5GiB) 00:07:50.554 Utilization (in LBAs): 1548666 (5GiB) 00:07:50.554 Thin Provisioning: Not Supported 00:07:50.554 Per-NS Atomic Units: No 00:07:50.554 Maximum Single Source Range Length: 128 00:07:50.554 Maximum Copy Length: 128 00:07:50.554 Maximum Source Range Count: 128 00:07:50.554 NGUID/EUI64 Never Reused: No 00:07:50.554 Namespace Write Protected: No 00:07:50.554 Number of LBA Formats: 8 00:07:50.554 Current LBA Format: LBA Format #07 00:07:50.554 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.554 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.554 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.554 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.554 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:50.554 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.554 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.554 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.554 00:07:50.554 NVM Specific Namespace Data 00:07:50.554 =========================== 00:07:50.554 Logical Block Storage Tag Mask: 0 00:07:50.554 Protection Information Capabilities: 00:07:50.554 16b Guard Protection Information Storage Tag Support: No 00:07:50.554 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.554 Storage Tag Check Read Support: No 00:07:50.554 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.554 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.555 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:50.555 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:50.816 ===================================================== 00:07:50.816 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.816 ===================================================== 00:07:50.816 Controller Capabilities/Features 00:07:50.816 ================================ 00:07:50.816 Vendor ID: 1b36 00:07:50.816 Subsystem Vendor ID: 1af4 00:07:50.816 Serial Number: 12341 00:07:50.816 Model Number: QEMU NVMe Ctrl 00:07:50.816 Firmware Version: 8.0.0 00:07:50.816 Recommended Arb Burst: 6 00:07:50.816 IEEE OUI Identifier: 00 54 52 00:07:50.816 Multi-path I/O 00:07:50.816 May have multiple subsystem ports: No 00:07:50.816 May have multiple controllers: No 00:07:50.816 Associated with SR-IOV VF: No 00:07:50.816 Max Data Transfer Size: 524288 00:07:50.816 Max Number of Namespaces: 256 00:07:50.816 Max Number of I/O Queues: 64 00:07:50.816 NVMe 
Specification Version (VS): 1.4 00:07:50.816 NVMe Specification Version (Identify): 1.4 00:07:50.816 Maximum Queue Entries: 2048 00:07:50.816 Contiguous Queues Required: Yes 00:07:50.816 Arbitration Mechanisms Supported 00:07:50.816 Weighted Round Robin: Not Supported 00:07:50.816 Vendor Specific: Not Supported 00:07:50.816 Reset Timeout: 7500 ms 00:07:50.816 Doorbell Stride: 4 bytes 00:07:50.816 NVM Subsystem Reset: Not Supported 00:07:50.816 Command Sets Supported 00:07:50.816 NVM Command Set: Supported 00:07:50.816 Boot Partition: Not Supported 00:07:50.816 Memory Page Size Minimum: 4096 bytes 00:07:50.816 Memory Page Size Maximum: 65536 bytes 00:07:50.816 Persistent Memory Region: Not Supported 00:07:50.816 Optional Asynchronous Events Supported 00:07:50.816 Namespace Attribute Notices: Supported 00:07:50.816 Firmware Activation Notices: Not Supported 00:07:50.816 ANA Change Notices: Not Supported 00:07:50.816 PLE Aggregate Log Change Notices: Not Supported 00:07:50.816 LBA Status Info Alert Notices: Not Supported 00:07:50.816 EGE Aggregate Log Change Notices: Not Supported 00:07:50.816 Normal NVM Subsystem Shutdown event: Not Supported 00:07:50.816 Zone Descriptor Change Notices: Not Supported 00:07:50.816 Discovery Log Change Notices: Not Supported 00:07:50.816 Controller Attributes 00:07:50.816 128-bit Host Identifier: Not Supported 00:07:50.816 Non-Operational Permissive Mode: Not Supported 00:07:50.816 NVM Sets: Not Supported 00:07:50.816 Read Recovery Levels: Not Supported 00:07:50.816 Endurance Groups: Not Supported 00:07:50.816 Predictable Latency Mode: Not Supported 00:07:50.816 Traffic Based Keep ALive: Not Supported 00:07:50.816 Namespace Granularity: Not Supported 00:07:50.816 SQ Associations: Not Supported 00:07:50.816 UUID List: Not Supported 00:07:50.816 Multi-Domain Subsystem: Not Supported 00:07:50.816 Fixed Capacity Management: Not Supported 00:07:50.816 Variable Capacity Management: Not Supported 00:07:50.816 Delete Endurance Group: Not Supported 00:07:50.816 Delete NVM Set: Not Supported 00:07:50.816 Extended LBA Formats Supported: Supported 00:07:50.816 Flexible Data Placement Supported: Not Supported 00:07:50.816 00:07:50.816 Controller Memory Buffer Support 00:07:50.816 ================================ 00:07:50.816 Supported: No 00:07:50.816 00:07:50.816 Persistent Memory Region Support 00:07:50.816 ================================ 00:07:50.816 Supported: No 00:07:50.816 00:07:50.816 Admin Command Set Attributes 00:07:50.816 ============================ 00:07:50.816 Security Send/Receive: Not Supported 00:07:50.816 Format NVM: Supported 00:07:50.816 Firmware Activate/Download: Not Supported 00:07:50.816 Namespace Management: Supported 00:07:50.816 Device Self-Test: Not Supported 00:07:50.816 Directives: Supported 00:07:50.816 NVMe-MI: Not Supported 00:07:50.816 Virtualization Management: Not Supported 00:07:50.816 Doorbell Buffer Config: Supported 00:07:50.816 Get LBA Status Capability: Not Supported 00:07:50.816 Command & Feature Lockdown Capability: Not Supported 00:07:50.816 Abort Command Limit: 4 00:07:50.816 Async Event Request Limit: 4 00:07:50.816 Number of Firmware Slots: N/A 00:07:50.816 Firmware Slot 1 Read-Only: N/A 00:07:50.816 Firmware Activation Without Reset: N/A 00:07:50.816 Multiple Update Detection Support: N/A 00:07:50.816 Firmware Update Granularity: No Information Provided 00:07:50.816 Per-Namespace SMART Log: Yes 00:07:50.816 Asymmetric Namespace Access Log Page: Not Supported 00:07:50.816 Subsystem NQN: nqn.2019-08.org.qemu:12341 
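The serial numbers (12340 through 12343) are the easiest way to tell the four QEMU controllers apart, and QEMU folds the serial into the default subsystem NQN, as the nqn.2019-08.org.qemu:12341 entry just above shows for this controller. When skimming a dump this long, a grep over a saved copy pairs each PCIe address with its identity; identify.log here is a hypothetical capture of this console output:

    grep -E 'NVMe Controller at|Serial Number:|Subsystem NQN:' identify.log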
00:07:50.816 Command Effects Log Page: Supported 00:07:50.816 Get Log Page Extended Data: Supported 00:07:50.816 Telemetry Log Pages: Not Supported 00:07:50.816 Persistent Event Log Pages: Not Supported 00:07:50.816 Supported Log Pages Log Page: May Support 00:07:50.816 Commands Supported & Effects Log Page: Not Supported 00:07:50.816 Feature Identifiers & Effects Log Page:May Support 00:07:50.816 NVMe-MI Commands & Effects Log Page: May Support 00:07:50.816 Data Area 4 for Telemetry Log: Not Supported 00:07:50.816 Error Log Page Entries Supported: 1 00:07:50.816 Keep Alive: Not Supported 00:07:50.816 00:07:50.816 NVM Command Set Attributes 00:07:50.816 ========================== 00:07:50.816 Submission Queue Entry Size 00:07:50.816 Max: 64 00:07:50.816 Min: 64 00:07:50.816 Completion Queue Entry Size 00:07:50.816 Max: 16 00:07:50.816 Min: 16 00:07:50.816 Number of Namespaces: 256 00:07:50.816 Compare Command: Supported 00:07:50.816 Write Uncorrectable Command: Not Supported 00:07:50.816 Dataset Management Command: Supported 00:07:50.816 Write Zeroes Command: Supported 00:07:50.816 Set Features Save Field: Supported 00:07:50.816 Reservations: Not Supported 00:07:50.816 Timestamp: Supported 00:07:50.817 Copy: Supported 00:07:50.817 Volatile Write Cache: Present 00:07:50.817 Atomic Write Unit (Normal): 1 00:07:50.817 Atomic Write Unit (PFail): 1 00:07:50.817 Atomic Compare & Write Unit: 1 00:07:50.817 Fused Compare & Write: Not Supported 00:07:50.817 Scatter-Gather List 00:07:50.817 SGL Command Set: Supported 00:07:50.817 SGL Keyed: Not Supported 00:07:50.817 SGL Bit Bucket Descriptor: Not Supported 00:07:50.817 SGL Metadata Pointer: Not Supported 00:07:50.817 Oversized SGL: Not Supported 00:07:50.817 SGL Metadata Address: Not Supported 00:07:50.817 SGL Offset: Not Supported 00:07:50.817 Transport SGL Data Block: Not Supported 00:07:50.817 Replay Protected Memory Block: Not Supported 00:07:50.817 00:07:50.817 Firmware Slot Information 00:07:50.817 ========================= 00:07:50.817 Active slot: 1 00:07:50.817 Slot 1 Firmware Revision: 1.0 00:07:50.817 00:07:50.817 00:07:50.817 Commands Supported and Effects 00:07:50.817 ============================== 00:07:50.817 Admin Commands 00:07:50.817 -------------- 00:07:50.817 Delete I/O Submission Queue (00h): Supported 00:07:50.817 Create I/O Submission Queue (01h): Supported 00:07:50.817 Get Log Page (02h): Supported 00:07:50.817 Delete I/O Completion Queue (04h): Supported 00:07:50.817 Create I/O Completion Queue (05h): Supported 00:07:50.817 Identify (06h): Supported 00:07:50.817 Abort (08h): Supported 00:07:50.817 Set Features (09h): Supported 00:07:50.817 Get Features (0Ah): Supported 00:07:50.817 Asynchronous Event Request (0Ch): Supported 00:07:50.817 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:50.817 Directive Send (19h): Supported 00:07:50.817 Directive Receive (1Ah): Supported 00:07:50.817 Virtualization Management (1Ch): Supported 00:07:50.817 Doorbell Buffer Config (7Ch): Supported 00:07:50.817 Format NVM (80h): Supported LBA-Change 00:07:50.817 I/O Commands 00:07:50.817 ------------ 00:07:50.817 Flush (00h): Supported LBA-Change 00:07:50.817 Write (01h): Supported LBA-Change 00:07:50.817 Read (02h): Supported 00:07:50.817 Compare (05h): Supported 00:07:50.817 Write Zeroes (08h): Supported LBA-Change 00:07:50.817 Dataset Management (09h): Supported LBA-Change 00:07:50.817 Unknown (0Ch): Supported 00:07:50.817 Unknown (12h): Supported 00:07:50.817 Copy (19h): Supported LBA-Change 00:07:50.817 Unknown (1Dh): 
Supported LBA-Change 00:07:50.817 00:07:50.817 Error Log 00:07:50.817 ========= 00:07:50.817 00:07:50.817 Arbitration 00:07:50.817 =========== 00:07:50.817 Arbitration Burst: no limit 00:07:50.817 00:07:50.817 Power Management 00:07:50.817 ================ 00:07:50.817 Number of Power States: 1 00:07:50.817 Current Power State: Power State #0 00:07:50.817 Power State #0: 00:07:50.817 Max Power: 25.00 W 00:07:50.817 Non-Operational State: Operational 00:07:50.817 Entry Latency: 16 microseconds 00:07:50.817 Exit Latency: 4 microseconds 00:07:50.817 Relative Read Throughput: 0 00:07:50.817 Relative Read Latency: 0 00:07:50.817 Relative Write Throughput: 0 00:07:50.817 Relative Write Latency: 0 00:07:50.817 Idle Power: Not Reported 00:07:50.817 Active Power: Not Reported 00:07:50.817 Non-Operational Permissive Mode: Not Supported 00:07:50.817 00:07:50.817 Health Information 00:07:50.817 ================== 00:07:50.817 Critical Warnings: 00:07:50.817 Available Spare Space: OK 00:07:50.817 Temperature: OK 00:07:50.817 Device Reliability: OK 00:07:50.817 Read Only: No 00:07:50.817 Volatile Memory Backup: OK 00:07:50.817 Current Temperature: 323 Kelvin (50 Celsius) 00:07:50.817 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:50.817 Available Spare: 0% 00:07:50.817 Available Spare Threshold: 0% 00:07:50.817 Life Percentage Used: 0% 00:07:50.817 Data Units Read: 921 00:07:50.817 Data Units Written: 794 00:07:50.817 Host Read Commands: 46616 00:07:50.817 Host Write Commands: 45513 00:07:50.817 Controller Busy Time: 0 minutes 00:07:50.817 Power Cycles: 0 00:07:50.817 Power On Hours: 0 hours 00:07:50.817 Unsafe Shutdowns: 0 00:07:50.817 Unrecoverable Media Errors: 0 00:07:50.817 Lifetime Error Log Entries: 0 00:07:50.817 Warning Temperature Time: 0 minutes 00:07:50.817 Critical Temperature Time: 0 minutes 00:07:50.817 00:07:50.817 Number of Queues 00:07:50.817 ================ 00:07:50.817 Number of I/O Submission Queues: 64 00:07:50.817 Number of I/O Completion Queues: 64 00:07:50.817 00:07:50.817 ZNS Specific Controller Data 00:07:50.817 ============================ 00:07:50.817 Zone Append Size Limit: 0 00:07:50.817 00:07:50.817 00:07:50.817 Active Namespaces 00:07:50.817 ================= 00:07:50.817 Namespace ID:1 00:07:50.817 Error Recovery Timeout: Unlimited 00:07:50.817 Command Set Identifier: NVM (00h) 00:07:50.817 Deallocate: Supported 00:07:50.817 Deallocated/Unwritten Error: Supported 00:07:50.817 Deallocated Read Value: All 0x00 00:07:50.817 Deallocate in Write Zeroes: Not Supported 00:07:50.817 Deallocated Guard Field: 0xFFFF 00:07:50.817 Flush: Supported 00:07:50.817 Reservation: Not Supported 00:07:50.817 Namespace Sharing Capabilities: Private 00:07:50.817 Size (in LBAs): 1310720 (5GiB) 00:07:50.817 Capacity (in LBAs): 1310720 (5GiB) 00:07:50.817 Utilization (in LBAs): 1310720 (5GiB) 00:07:50.817 Thin Provisioning: Not Supported 00:07:50.817 Per-NS Atomic Units: No 00:07:50.817 Maximum Single Source Range Length: 128 00:07:50.817 Maximum Copy Length: 128 00:07:50.817 Maximum Source Range Count: 128 00:07:50.817 NGUID/EUI64 Never Reused: No 00:07:50.817 Namespace Write Protected: No 00:07:50.817 Number of LBA Formats: 8 00:07:50.817 Current LBA Format: LBA Format #04 00:07:50.817 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:50.817 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:50.817 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:50.817 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:50.817 LBA Format #04: Data Size: 4096 Metadata Size: 0 
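The namespace geometry above is printed in LBAs with a parenthesized total; the conversion is simply the LBA count times the data size of the current LBA format. For namespace 1 here (1310720 LBAs, current format #04 with 4096-byte blocks and no metadata) the arithmetic matches the 5GiB figure, and the remaining formats #05 through #07 continue below:

    echo $(( 1310720 * 4096 / 2**30 ))   # LBA count x block size -> 5 (GiB)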
00:07:50.817 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:50.817 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:50.817 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:50.817 00:07:50.817 NVM Specific Namespace Data 00:07:50.817 =========================== 00:07:50.817 Logical Block Storage Tag Mask: 0 00:07:50.817 Protection Information Capabilities: 00:07:50.817 16b Guard Protection Information Storage Tag Support: No 00:07:50.817 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:50.817 Storage Tag Check Read Support: No 00:07:50.817 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:50.817 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:50.817 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:51.080 ===================================================== 00:07:51.080 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.080 ===================================================== 00:07:51.080 Controller Capabilities/Features 00:07:51.080 ================================ 00:07:51.080 Vendor ID: 1b36 00:07:51.080 Subsystem Vendor ID: 1af4 00:07:51.080 Serial Number: 12342 00:07:51.080 Model Number: QEMU NVMe Ctrl 00:07:51.080 Firmware Version: 8.0.0 00:07:51.080 Recommended Arb Burst: 6 00:07:51.080 IEEE OUI Identifier: 00 54 52 00:07:51.080 Multi-path I/O 00:07:51.080 May have multiple subsystem ports: No 00:07:51.080 May have multiple controllers: No 00:07:51.080 Associated with SR-IOV VF: No 00:07:51.080 Max Data Transfer Size: 524288 00:07:51.080 Max Number of Namespaces: 256 00:07:51.080 Max Number of I/O Queues: 64 00:07:51.080 NVMe Specification Version (VS): 1.4 00:07:51.080 NVMe Specification Version (Identify): 1.4 00:07:51.080 Maximum Queue Entries: 2048 00:07:51.080 Contiguous Queues Required: Yes 00:07:51.080 Arbitration Mechanisms Supported 00:07:51.080 Weighted Round Robin: Not Supported 00:07:51.080 Vendor Specific: Not Supported 00:07:51.080 Reset Timeout: 7500 ms 00:07:51.080 Doorbell Stride: 4 bytes 00:07:51.080 NVM Subsystem Reset: Not Supported 00:07:51.080 Command Sets Supported 00:07:51.080 NVM Command Set: Supported 00:07:51.080 Boot Partition: Not Supported 00:07:51.080 Memory Page Size Minimum: 4096 bytes 00:07:51.080 Memory Page Size Maximum: 65536 bytes 00:07:51.080 Persistent Memory Region: Not Supported 00:07:51.080 Optional Asynchronous Events Supported 00:07:51.080 Namespace Attribute Notices: Supported 00:07:51.080 Firmware Activation Notices: Not Supported 00:07:51.080 ANA Change Notices: Not Supported 00:07:51.080 PLE Aggregate Log Change Notices: Not Supported 00:07:51.080 LBA Status Info Alert Notices: 
Not Supported 00:07:51.080 EGE Aggregate Log Change Notices: Not Supported 00:07:51.080 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.080 Zone Descriptor Change Notices: Not Supported 00:07:51.080 Discovery Log Change Notices: Not Supported 00:07:51.080 Controller Attributes 00:07:51.080 128-bit Host Identifier: Not Supported 00:07:51.080 Non-Operational Permissive Mode: Not Supported 00:07:51.080 NVM Sets: Not Supported 00:07:51.080 Read Recovery Levels: Not Supported 00:07:51.080 Endurance Groups: Not Supported 00:07:51.080 Predictable Latency Mode: Not Supported 00:07:51.080 Traffic Based Keep ALive: Not Supported 00:07:51.080 Namespace Granularity: Not Supported 00:07:51.080 SQ Associations: Not Supported 00:07:51.080 UUID List: Not Supported 00:07:51.080 Multi-Domain Subsystem: Not Supported 00:07:51.080 Fixed Capacity Management: Not Supported 00:07:51.080 Variable Capacity Management: Not Supported 00:07:51.080 Delete Endurance Group: Not Supported 00:07:51.080 Delete NVM Set: Not Supported 00:07:51.080 Extended LBA Formats Supported: Supported 00:07:51.080 Flexible Data Placement Supported: Not Supported 00:07:51.080 00:07:51.080 Controller Memory Buffer Support 00:07:51.080 ================================ 00:07:51.080 Supported: No 00:07:51.080 00:07:51.080 Persistent Memory Region Support 00:07:51.080 ================================ 00:07:51.080 Supported: No 00:07:51.080 00:07:51.080 Admin Command Set Attributes 00:07:51.080 ============================ 00:07:51.080 Security Send/Receive: Not Supported 00:07:51.080 Format NVM: Supported 00:07:51.080 Firmware Activate/Download: Not Supported 00:07:51.080 Namespace Management: Supported 00:07:51.080 Device Self-Test: Not Supported 00:07:51.080 Directives: Supported 00:07:51.080 NVMe-MI: Not Supported 00:07:51.080 Virtualization Management: Not Supported 00:07:51.080 Doorbell Buffer Config: Supported 00:07:51.080 Get LBA Status Capability: Not Supported 00:07:51.080 Command & Feature Lockdown Capability: Not Supported 00:07:51.080 Abort Command Limit: 4 00:07:51.080 Async Event Request Limit: 4 00:07:51.080 Number of Firmware Slots: N/A 00:07:51.080 Firmware Slot 1 Read-Only: N/A 00:07:51.080 Firmware Activation Without Reset: N/A 00:07:51.080 Multiple Update Detection Support: N/A 00:07:51.080 Firmware Update Granularity: No Information Provided 00:07:51.080 Per-Namespace SMART Log: Yes 00:07:51.080 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.080 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:51.080 Command Effects Log Page: Supported 00:07:51.081 Get Log Page Extended Data: Supported 00:07:51.081 Telemetry Log Pages: Not Supported 00:07:51.081 Persistent Event Log Pages: Not Supported 00:07:51.081 Supported Log Pages Log Page: May Support 00:07:51.081 Commands Supported & Effects Log Page: Not Supported 00:07:51.081 Feature Identifiers & Effects Log Page:May Support 00:07:51.081 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.081 Data Area 4 for Telemetry Log: Not Supported 00:07:51.081 Error Log Page Entries Supported: 1 00:07:51.081 Keep Alive: Not Supported 00:07:51.081 00:07:51.081 NVM Command Set Attributes 00:07:51.081 ========================== 00:07:51.081 Submission Queue Entry Size 00:07:51.081 Max: 64 00:07:51.081 Min: 64 00:07:51.081 Completion Queue Entry Size 00:07:51.081 Max: 16 00:07:51.081 Min: 16 00:07:51.081 Number of Namespaces: 256 00:07:51.081 Compare Command: Supported 00:07:51.081 Write Uncorrectable Command: Not Supported 00:07:51.081 Dataset Management Command: 
Supported 00:07:51.081 Write Zeroes Command: Supported 00:07:51.081 Set Features Save Field: Supported 00:07:51.081 Reservations: Not Supported 00:07:51.081 Timestamp: Supported 00:07:51.081 Copy: Supported 00:07:51.081 Volatile Write Cache: Present 00:07:51.081 Atomic Write Unit (Normal): 1 00:07:51.081 Atomic Write Unit (PFail): 1 00:07:51.081 Atomic Compare & Write Unit: 1 00:07:51.081 Fused Compare & Write: Not Supported 00:07:51.081 Scatter-Gather List 00:07:51.081 SGL Command Set: Supported 00:07:51.081 SGL Keyed: Not Supported 00:07:51.081 SGL Bit Bucket Descriptor: Not Supported 00:07:51.081 SGL Metadata Pointer: Not Supported 00:07:51.081 Oversized SGL: Not Supported 00:07:51.081 SGL Metadata Address: Not Supported 00:07:51.081 SGL Offset: Not Supported 00:07:51.081 Transport SGL Data Block: Not Supported 00:07:51.081 Replay Protected Memory Block: Not Supported 00:07:51.081 00:07:51.081 Firmware Slot Information 00:07:51.081 ========================= 00:07:51.081 Active slot: 1 00:07:51.081 Slot 1 Firmware Revision: 1.0 00:07:51.081 00:07:51.081 00:07:51.081 Commands Supported and Effects 00:07:51.081 ============================== 00:07:51.081 Admin Commands 00:07:51.081 -------------- 00:07:51.081 Delete I/O Submission Queue (00h): Supported 00:07:51.081 Create I/O Submission Queue (01h): Supported 00:07:51.081 Get Log Page (02h): Supported 00:07:51.081 Delete I/O Completion Queue (04h): Supported 00:07:51.081 Create I/O Completion Queue (05h): Supported 00:07:51.081 Identify (06h): Supported 00:07:51.081 Abort (08h): Supported 00:07:51.081 Set Features (09h): Supported 00:07:51.081 Get Features (0Ah): Supported 00:07:51.081 Asynchronous Event Request (0Ch): Supported 00:07:51.081 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.081 Directive Send (19h): Supported 00:07:51.081 Directive Receive (1Ah): Supported 00:07:51.081 Virtualization Management (1Ch): Supported 00:07:51.081 Doorbell Buffer Config (7Ch): Supported 00:07:51.081 Format NVM (80h): Supported LBA-Change 00:07:51.081 I/O Commands 00:07:51.081 ------------ 00:07:51.081 Flush (00h): Supported LBA-Change 00:07:51.081 Write (01h): Supported LBA-Change 00:07:51.081 Read (02h): Supported 00:07:51.081 Compare (05h): Supported 00:07:51.081 Write Zeroes (08h): Supported LBA-Change 00:07:51.081 Dataset Management (09h): Supported LBA-Change 00:07:51.081 Unknown (0Ch): Supported 00:07:51.081 Unknown (12h): Supported 00:07:51.081 Copy (19h): Supported LBA-Change 00:07:51.081 Unknown (1Dh): Supported LBA-Change 00:07:51.081 00:07:51.081 Error Log 00:07:51.081 ========= 00:07:51.081 00:07:51.081 Arbitration 00:07:51.081 =========== 00:07:51.081 Arbitration Burst: no limit 00:07:51.081 00:07:51.081 Power Management 00:07:51.081 ================ 00:07:51.081 Number of Power States: 1 00:07:51.081 Current Power State: Power State #0 00:07:51.081 Power State #0: 00:07:51.081 Max Power: 25.00 W 00:07:51.081 Non-Operational State: Operational 00:07:51.081 Entry Latency: 16 microseconds 00:07:51.081 Exit Latency: 4 microseconds 00:07:51.081 Relative Read Throughput: 0 00:07:51.081 Relative Read Latency: 0 00:07:51.081 Relative Write Throughput: 0 00:07:51.081 Relative Write Latency: 0 00:07:51.081 Idle Power: Not Reported 00:07:51.081 Active Power: Not Reported 00:07:51.081 Non-Operational Permissive Mode: Not Supported 00:07:51.081 00:07:51.081 Health Information 00:07:51.081 ================== 00:07:51.081 Critical Warnings: 00:07:51.081 Available Spare Space: OK 00:07:51.081 Temperature: OK 00:07:51.081 Device 
Reliability: OK 00:07:51.081 Read Only: No 00:07:51.081 Volatile Memory Backup: OK 00:07:51.081 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.081 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.081 Available Spare: 0% 00:07:51.081 Available Spare Threshold: 0% 00:07:51.081 Life Percentage Used: 0% 00:07:51.081 Data Units Read: 1957 00:07:51.081 Data Units Written: 1745 00:07:51.081 Host Read Commands: 96833 00:07:51.081 Host Write Commands: 95102 00:07:51.081 Controller Busy Time: 0 minutes 00:07:51.081 Power Cycles: 0 00:07:51.081 Power On Hours: 0 hours 00:07:51.081 Unsafe Shutdowns: 0 00:07:51.081 Unrecoverable Media Errors: 0 00:07:51.081 Lifetime Error Log Entries: 0 00:07:51.081 Warning Temperature Time: 0 minutes 00:07:51.081 Critical Temperature Time: 0 minutes 00:07:51.081 00:07:51.081 Number of Queues 00:07:51.081 ================ 00:07:51.081 Number of I/O Submission Queues: 64 00:07:51.081 Number of I/O Completion Queues: 64 00:07:51.081 00:07:51.081 ZNS Specific Controller Data 00:07:51.081 ============================ 00:07:51.081 Zone Append Size Limit: 0 00:07:51.081 00:07:51.081 00:07:51.081 Active Namespaces 00:07:51.081 ================= 00:07:51.081 Namespace ID:1 00:07:51.081 Error Recovery Timeout: Unlimited 00:07:51.081 Command Set Identifier: NVM (00h) 00:07:51.081 Deallocate: Supported 00:07:51.081 Deallocated/Unwritten Error: Supported 00:07:51.081 Deallocated Read Value: All 0x00 00:07:51.081 Deallocate in Write Zeroes: Not Supported 00:07:51.081 Deallocated Guard Field: 0xFFFF 00:07:51.081 Flush: Supported 00:07:51.081 Reservation: Not Supported 00:07:51.081 Namespace Sharing Capabilities: Private 00:07:51.081 Size (in LBAs): 1048576 (4GiB) 00:07:51.082 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.082 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.082 Thin Provisioning: Not Supported 00:07:51.082 Per-NS Atomic Units: No 00:07:51.082 Maximum Single Source Range Length: 128 00:07:51.082 Maximum Copy Length: 128 00:07:51.082 Maximum Source Range Count: 128 00:07:51.082 NGUID/EUI64 Never Reused: No 00:07:51.082 Namespace Write Protected: No 00:07:51.082 Number of LBA Formats: 8 00:07:51.082 Current LBA Format: LBA Format #04 00:07:51.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.082 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.082 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.082 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.082 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.082 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.082 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.082 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.082 00:07:51.082 NVM Specific Namespace Data 00:07:51.082 =========================== 00:07:51.082 Logical Block Storage Tag Mask: 0 00:07:51.082 Protection Information Capabilities: 00:07:51.082 16b Guard Protection Information Storage Tag Support: No 00:07:51.082 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.082 Storage Tag Check Read Support: No 00:07:51.082 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Namespace ID:2 00:07:51.082 Error Recovery Timeout: Unlimited 00:07:51.082 Command Set Identifier: NVM (00h) 00:07:51.082 Deallocate: Supported 00:07:51.082 Deallocated/Unwritten Error: Supported 00:07:51.082 Deallocated Read Value: All 0x00 00:07:51.082 Deallocate in Write Zeroes: Not Supported 00:07:51.082 Deallocated Guard Field: 0xFFFF 00:07:51.082 Flush: Supported 00:07:51.082 Reservation: Not Supported 00:07:51.082 Namespace Sharing Capabilities: Private 00:07:51.082 Size (in LBAs): 1048576 (4GiB) 00:07:51.082 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.082 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.082 Thin Provisioning: Not Supported 00:07:51.082 Per-NS Atomic Units: No 00:07:51.082 Maximum Single Source Range Length: 128 00:07:51.082 Maximum Copy Length: 128 00:07:51.082 Maximum Source Range Count: 128 00:07:51.082 NGUID/EUI64 Never Reused: No 00:07:51.082 Namespace Write Protected: No 00:07:51.082 Number of LBA Formats: 8 00:07:51.082 Current LBA Format: LBA Format #04 00:07:51.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.082 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.082 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.082 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.082 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.082 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.082 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.082 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.082 00:07:51.082 NVM Specific Namespace Data 00:07:51.082 =========================== 00:07:51.082 Logical Block Storage Tag Mask: 0 00:07:51.082 Protection Information Capabilities: 00:07:51.082 16b Guard Protection Information Storage Tag Support: No 00:07:51.082 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.082 Storage Tag Check Read Support: No 00:07:51.082 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.082 Namespace ID:3 00:07:51.082 Error Recovery Timeout: Unlimited 00:07:51.082 Command Set Identifier: NVM (00h) 00:07:51.082 Deallocate: Supported 00:07:51.082 Deallocated/Unwritten Error: Supported 00:07:51.082 Deallocated Read Value: All 0x00 00:07:51.082 Deallocate in Write Zeroes: Not Supported 00:07:51.082 Deallocated Guard Field: 0xFFFF 00:07:51.082 Flush: Supported 00:07:51.082 Reservation: Not Supported 00:07:51.082 
Namespace Sharing Capabilities: Private 00:07:51.082 Size (in LBAs): 1048576 (4GiB) 00:07:51.082 Capacity (in LBAs): 1048576 (4GiB) 00:07:51.082 Utilization (in LBAs): 1048576 (4GiB) 00:07:51.082 Thin Provisioning: Not Supported 00:07:51.082 Per-NS Atomic Units: No 00:07:51.082 Maximum Single Source Range Length: 128 00:07:51.082 Maximum Copy Length: 128 00:07:51.082 Maximum Source Range Count: 128 00:07:51.082 NGUID/EUI64 Never Reused: No 00:07:51.082 Namespace Write Protected: No 00:07:51.082 Number of LBA Formats: 8 00:07:51.082 Current LBA Format: LBA Format #04 00:07:51.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.082 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.082 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.082 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.082 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.082 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.082 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:51.082 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.082 00:07:51.082 NVM Specific Namespace Data 00:07:51.082 =========================== 00:07:51.082 Logical Block Storage Tag Mask: 0 00:07:51.082 Protection Information Capabilities: 00:07:51.082 16b Guard Protection Information Storage Tag Support: No 00:07:51.082 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.082 Storage Tag Check Read Support: No 00:07:51.082 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.083 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:51.083 09:01:29 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:51.341 ===================================================== 00:07:51.341 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.341 ===================================================== 00:07:51.341 Controller Capabilities/Features 00:07:51.341 ================================ 00:07:51.341 Vendor ID: 1b36 00:07:51.341 Subsystem Vendor ID: 1af4 00:07:51.341 Serial Number: 12343 00:07:51.341 Model Number: QEMU NVMe Ctrl 00:07:51.341 Firmware Version: 8.0.0 00:07:51.341 Recommended Arb Burst: 6 00:07:51.341 IEEE OUI Identifier: 00 54 52 00:07:51.341 Multi-path I/O 00:07:51.341 May have multiple subsystem ports: No 00:07:51.341 May have multiple controllers: Yes 00:07:51.341 Associated with SR-IOV VF: No 00:07:51.341 Max Data Transfer Size: 524288 00:07:51.341 Max Number of Namespaces: 256 00:07:51.341 Max Number of I/O Queues: 64 00:07:51.341 NVMe Specification Version (VS): 1.4 00:07:51.341 NVMe Specification Version (Identify): 1.4 00:07:51.341 Maximum Queue Entries: 2048 
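This last controller (serial 12343) backs the FDP-enabled subsystem, and its identify data diverges from the other three in a few spots: it reports 'May have multiple controllers: Yes' above, and further down 'Endurance Groups: Supported', 'Flexible Data Placement Supported: Supported', and the nqn.2019-08.org.qemu:fdp-subsys3 subsystem NQN. A one-liner to pull exactly those discriminating lines from a saved dump (identify.log again being a hypothetical capture):

    grep -E 'Serial Number:|May have multiple controllers:|Endurance Groups:|Flexible Data Placement|Subsystem NQN:' identify.log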
00:07:51.341 Contiguous Queues Required: Yes 00:07:51.341 Arbitration Mechanisms Supported 00:07:51.341 Weighted Round Robin: Not Supported 00:07:51.341 Vendor Specific: Not Supported 00:07:51.341 Reset Timeout: 7500 ms 00:07:51.341 Doorbell Stride: 4 bytes 00:07:51.341 NVM Subsystem Reset: Not Supported 00:07:51.341 Command Sets Supported 00:07:51.341 NVM Command Set: Supported 00:07:51.341 Boot Partition: Not Supported 00:07:51.341 Memory Page Size Minimum: 4096 bytes 00:07:51.341 Memory Page Size Maximum: 65536 bytes 00:07:51.341 Persistent Memory Region: Not Supported 00:07:51.341 Optional Asynchronous Events Supported 00:07:51.341 Namespace Attribute Notices: Supported 00:07:51.341 Firmware Activation Notices: Not Supported 00:07:51.341 ANA Change Notices: Not Supported 00:07:51.341 PLE Aggregate Log Change Notices: Not Supported 00:07:51.341 LBA Status Info Alert Notices: Not Supported 00:07:51.341 EGE Aggregate Log Change Notices: Not Supported 00:07:51.341 Normal NVM Subsystem Shutdown event: Not Supported 00:07:51.341 Zone Descriptor Change Notices: Not Supported 00:07:51.341 Discovery Log Change Notices: Not Supported 00:07:51.341 Controller Attributes 00:07:51.341 128-bit Host Identifier: Not Supported 00:07:51.341 Non-Operational Permissive Mode: Not Supported 00:07:51.341 NVM Sets: Not Supported 00:07:51.341 Read Recovery Levels: Not Supported 00:07:51.341 Endurance Groups: Supported 00:07:51.341 Predictable Latency Mode: Not Supported 00:07:51.341 Traffic Based Keep Alive: Not Supported 00:07:51.341 Namespace Granularity: Not Supported 00:07:51.341 SQ Associations: Not Supported 00:07:51.341 UUID List: Not Supported 00:07:51.341 Multi-Domain Subsystem: Not Supported 00:07:51.341 Fixed Capacity Management: Not Supported 00:07:51.341 Variable Capacity Management: Not Supported 00:07:51.341 Delete Endurance Group: Not Supported 00:07:51.341 Delete NVM Set: Not Supported 00:07:51.341 Extended LBA Formats Supported: Supported 00:07:51.341 Flexible Data Placement Supported: Supported 00:07:51.341
00:07:51.341 Controller Memory Buffer Support 00:07:51.341 ================================ 00:07:51.341 Supported: No 00:07:51.341
00:07:51.341 Persistent Memory Region Support 00:07:51.341 ================================ 00:07:51.341 Supported: No 00:07:51.341
00:07:51.341 Admin Command Set Attributes 00:07:51.341 ============================ 00:07:51.341 Security Send/Receive: Not Supported 00:07:51.341 Format NVM: Supported 00:07:51.341 Firmware Activate/Download: Not Supported 00:07:51.341 Namespace Management: Supported 00:07:51.341 Device Self-Test: Not Supported 00:07:51.341 Directives: Supported 00:07:51.341 NVMe-MI: Not Supported 00:07:51.341 Virtualization Management: Not Supported 00:07:51.341 Doorbell Buffer Config: Supported 00:07:51.341 Get LBA Status Capability: Not Supported 00:07:51.341 Command & Feature Lockdown Capability: Not Supported 00:07:51.341 Abort Command Limit: 4 00:07:51.341 Async Event Request Limit: 4 00:07:51.341 Number of Firmware Slots: N/A 00:07:51.341 Firmware Slot 1 Read-Only: N/A 00:07:51.341 Firmware Activation Without Reset: N/A 00:07:51.341 Multiple Update Detection Support: N/A 00:07:51.341 Firmware Update Granularity: No Information Provided 00:07:51.342 Per-Namespace SMART Log: Yes 00:07:51.342 Asymmetric Namespace Access Log Page: Not Supported 00:07:51.342 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:51.342 Command Effects Log Page: Supported 00:07:51.342 Get Log Page Extended Data: Supported 00:07:51.342 Telemetry Log Pages: Not Supported 00:07:51.342 Persistent Event Log Pages: Not Supported 00:07:51.342 Supported Log Pages Log Page: May Support 00:07:51.342 Commands Supported & Effects Log Page: Not Supported 00:07:51.342 Feature Identifiers & Effects Log Page: May Support 00:07:51.342 NVMe-MI Commands & Effects Log Page: May Support 00:07:51.342 Data Area 4 for Telemetry Log: Not Supported 00:07:51.342 Error Log Page Entries Supported: 1 00:07:51.342 Keep Alive: Not Supported 00:07:51.342
00:07:51.342 NVM Command Set Attributes 00:07:51.342 ========================== 00:07:51.342 Submission Queue Entry Size 00:07:51.342 Max: 64 00:07:51.342 Min: 64 00:07:51.342 Completion Queue Entry Size 00:07:51.342 Max: 16 00:07:51.342 Min: 16 00:07:51.342 Number of Namespaces: 256 00:07:51.342 Compare Command: Supported 00:07:51.342 Write Uncorrectable Command: Not Supported 00:07:51.342 Dataset Management Command: Supported 00:07:51.342 Write Zeroes Command: Supported 00:07:51.342 Set Features Save Field: Supported 00:07:51.342 Reservations: Not Supported 00:07:51.342 Timestamp: Supported 00:07:51.342 Copy: Supported 00:07:51.342 Volatile Write Cache: Present 00:07:51.342 Atomic Write Unit (Normal): 1 00:07:51.342 Atomic Write Unit (PFail): 1 00:07:51.342 Atomic Compare & Write Unit: 1 00:07:51.342 Fused Compare & Write: Not Supported 00:07:51.342 Scatter-Gather List 00:07:51.342 SGL Command Set: Supported 00:07:51.342 SGL Keyed: Not Supported 00:07:51.342 SGL Bit Bucket Descriptor: Not Supported 00:07:51.342 SGL Metadata Pointer: Not Supported 00:07:51.342 Oversized SGL: Not Supported 00:07:51.342 SGL Metadata Address: Not Supported 00:07:51.342 SGL Offset: Not Supported 00:07:51.342 Transport SGL Data Block: Not Supported 00:07:51.342 Replay Protected Memory Block: Not Supported 00:07:51.342
00:07:51.342 Firmware Slot Information 00:07:51.342 ========================= 00:07:51.342 Active slot: 1 00:07:51.342 Slot 1 Firmware Revision: 1.0 00:07:51.342 00:07:51.342
00:07:51.342 Commands Supported and Effects 00:07:51.342 ============================== 00:07:51.342 Admin Commands 00:07:51.342 -------------- 00:07:51.342 Delete I/O Submission Queue (00h): Supported 00:07:51.342 Create I/O Submission Queue (01h): Supported 00:07:51.342 Get Log Page (02h): Supported 00:07:51.342 Delete I/O Completion Queue (04h): Supported 00:07:51.342 Create I/O Completion Queue (05h): Supported 00:07:51.342 Identify (06h): Supported 00:07:51.342 Abort (08h): Supported 00:07:51.342 Set Features (09h): Supported 00:07:51.342 Get Features (0Ah): Supported 00:07:51.342 Asynchronous Event Request (0Ch): Supported 00:07:51.342 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:51.342 Directive Send (19h): Supported 00:07:51.342 Directive Receive (1Ah): Supported 00:07:51.342 Virtualization Management (1Ch): Supported 00:07:51.342 Doorbell Buffer Config (7Ch): Supported 00:07:51.342 Format NVM (80h): Supported LBA-Change 00:07:51.342 I/O Commands 00:07:51.342 ------------ 00:07:51.342 Flush (00h): Supported LBA-Change 00:07:51.342 Write (01h): Supported LBA-Change 00:07:51.342 Read (02h): Supported 00:07:51.342 Compare (05h): Supported 00:07:51.342 Write Zeroes (08h): Supported LBA-Change 00:07:51.342 Dataset Management (09h): Supported LBA-Change 00:07:51.342 Unknown (0Ch): Supported 00:07:51.342 Unknown (12h): Supported 00:07:51.342 Copy (19h): Supported LBA-Change 00:07:51.342 Unknown (1Dh): Supported LBA-Change 00:07:51.342
00:07:51.342 Error Log 00:07:51.342 ========= 00:07:51.342
00:07:51.342 Arbitration 00:07:51.342 ===========
00:07:51.342 Arbitration Burst: no limit 00:07:51.342 00:07:51.342 Power Management 00:07:51.342 ================ 00:07:51.342 Number of Power States: 1 00:07:51.342 Current Power State: Power State #0 00:07:51.342 Power State #0: 00:07:51.342 Max Power: 25.00 W 00:07:51.342 Non-Operational State: Operational 00:07:51.342 Entry Latency: 16 microseconds 00:07:51.342 Exit Latency: 4 microseconds 00:07:51.342 Relative Read Throughput: 0 00:07:51.342 Relative Read Latency: 0 00:07:51.342 Relative Write Throughput: 0 00:07:51.342 Relative Write Latency: 0 00:07:51.342 Idle Power: Not Reported 00:07:51.342 Active Power: Not Reported 00:07:51.342 Non-Operational Permissive Mode: Not Supported 00:07:51.342 00:07:51.342 Health Information 00:07:51.342 ================== 00:07:51.342 Critical Warnings: 00:07:51.342 Available Spare Space: OK 00:07:51.342 Temperature: OK 00:07:51.342 Device Reliability: OK 00:07:51.342 Read Only: No 00:07:51.342 Volatile Memory Backup: OK 00:07:51.342 Current Temperature: 323 Kelvin (50 Celsius) 00:07:51.342 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:51.342 Available Spare: 0% 00:07:51.342 Available Spare Threshold: 0% 00:07:51.342 Life Percentage Used: 0% 00:07:51.342 Data Units Read: 713 00:07:51.342 Data Units Written: 642 00:07:51.342 Host Read Commands: 32826 00:07:51.342 Host Write Commands: 32252 00:07:51.342 Controller Busy Time: 0 minutes 00:07:51.342 Power Cycles: 0 00:07:51.342 Power On Hours: 0 hours 00:07:51.342 Unsafe Shutdowns: 0 00:07:51.342 Unrecoverable Media Errors: 0 00:07:51.342 Lifetime Error Log Entries: 0 00:07:51.342 Warning Temperature Time: 0 minutes 00:07:51.342 Critical Temperature Time: 0 minutes 00:07:51.342 00:07:51.342 Number of Queues 00:07:51.342 ================ 00:07:51.342 Number of I/O Submission Queues: 64 00:07:51.342 Number of I/O Completion Queues: 64 00:07:51.342 00:07:51.342 ZNS Specific Controller Data 00:07:51.342 ============================ 00:07:51.342 Zone Append Size Limit: 0 00:07:51.342 00:07:51.342 00:07:51.342 Active Namespaces 00:07:51.342 ================= 00:07:51.342 Namespace ID:1 00:07:51.342 Error Recovery Timeout: Unlimited 00:07:51.342 Command Set Identifier: NVM (00h) 00:07:51.342 Deallocate: Supported 00:07:51.342 Deallocated/Unwritten Error: Supported 00:07:51.342 Deallocated Read Value: All 0x00 00:07:51.342 Deallocate in Write Zeroes: Not Supported 00:07:51.342 Deallocated Guard Field: 0xFFFF 00:07:51.342 Flush: Supported 00:07:51.342 Reservation: Not Supported 00:07:51.342 Namespace Sharing Capabilities: Multiple Controllers 00:07:51.342 Size (in LBAs): 262144 (1GiB) 00:07:51.342 Capacity (in LBAs): 262144 (1GiB) 00:07:51.342 Utilization (in LBAs): 262144 (1GiB) 00:07:51.342 Thin Provisioning: Not Supported 00:07:51.342 Per-NS Atomic Units: No 00:07:51.342 Maximum Single Source Range Length: 128 00:07:51.342 Maximum Copy Length: 128 00:07:51.342 Maximum Source Range Count: 128 00:07:51.342 NGUID/EUI64 Never Reused: No 00:07:51.342 Namespace Write Protected: No 00:07:51.342 Endurance group ID: 1 00:07:51.342 Number of LBA Formats: 8 00:07:51.342 Current LBA Format: LBA Format #04 00:07:51.342 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:51.342 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:51.342 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:51.342 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:51.342 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:51.342 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:51.342 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:51.342 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:51.342
00:07:51.342 Get Feature FDP: 00:07:51.342 ================ 00:07:51.342 Enabled: Yes 00:07:51.342 FDP configuration index: 0 00:07:51.342
00:07:51.342 FDP configurations log page 00:07:51.342 =========================== 00:07:51.342 Number of FDP configurations: 1 00:07:51.342 Version: 0 00:07:51.342 Size: 112 00:07:51.342 FDP Configuration Descriptor: 0 00:07:51.342 Descriptor Size: 96 00:07:51.342 Reclaim Group Identifier format: 2 00:07:51.342 FDP Volatile Write Cache: Not Present 00:07:51.342 FDP Configuration: Valid 00:07:51.342 Vendor Specific Size: 0 00:07:51.342 Number of Reclaim Groups: 2 00:07:51.342 Number of Reclaim Unit Handles: 8 00:07:51.342 Max Placement Identifiers: 128 00:07:51.342 Number of Namespaces Supported: 256 00:07:51.342 Reclaim Unit Nominal Size: 6000000 bytes 00:07:51.342 Estimated Reclaim Unit Time Limit: Not Reported 00:07:51.342 RUH Desc #000: RUH Type: Initially Isolated 00:07:51.342 RUH Desc #001: RUH Type: Initially Isolated 00:07:51.342 RUH Desc #002: RUH Type: Initially Isolated 00:07:51.343 RUH Desc #003: RUH Type: Initially Isolated 00:07:51.343 RUH Desc #004: RUH Type: Initially Isolated 00:07:51.343 RUH Desc #005: RUH Type: Initially Isolated 00:07:51.343 RUH Desc #006: RUH Type: Initially Isolated 00:07:51.343 RUH Desc #007: RUH Type: Initially Isolated 00:07:51.343
00:07:51.343 FDP reclaim unit handle usage log page 00:07:51.343 ====================================== 00:07:51.343 Number of Reclaim Unit Handles: 8 00:07:51.343 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:51.343 RUH Usage Desc #001: RUH Attributes: Unused 00:07:51.343 RUH Usage Desc #002: RUH Attributes: Unused 00:07:51.343 RUH Usage Desc #003: RUH Attributes: Unused 00:07:51.343 RUH Usage Desc #004: RUH Attributes: Unused 00:07:51.343 RUH Usage Desc #005: RUH Attributes: Unused 00:07:51.343 RUH Usage Desc #006: RUH Attributes: Unused 00:07:51.343 RUH Usage Desc #007: RUH Attributes: Unused 00:07:51.343
00:07:51.343 FDP statistics log page 00:07:51.343 ======================= 00:07:51.343 Host bytes with metadata written: 385724416 00:07:51.343 Media bytes with metadata written: 385765376 00:07:51.343 Media bytes erased: 0 00:07:51.343
00:07:51.343 FDP events log page 00:07:51.343 =================== 00:07:51.343 Number of FDP events: 0 00:07:51.343
00:07:51.343 NVM Specific Namespace Data 00:07:51.343 =========================== 00:07:51.343 Logical Block Storage Tag Mask: 0 00:07:51.343 Protection Information Capabilities: 00:07:51.343 16b Guard Protection Information Storage Tag Support: No 00:07:51.343 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:51.343 Storage Tag Check Read Support: No 00:07:51.343 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:51.343 00:07:51.343 real 0m1.274s 00:07:51.343 user 0m0.453s 00:07:51.343 sys 0m0.591s 00:07:51.343 09:01:30 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.343 09:01:30 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:51.343 ************************************ 00:07:51.343 END TEST nvme_identify 00:07:51.343 ************************************ 00:07:51.343 09:01:30 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:51.343 09:01:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.343 09:01:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.343 09:01:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.343 ************************************ 00:07:51.343 START TEST nvme_perf 00:07:51.343 ************************************ 00:07:51.343 09:01:30 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:51.343 09:01:30 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:52.722 Initializing NVMe Controllers 00:07:52.722 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:52.722 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:52.722 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:52.722 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:52.722 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:52.722 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:52.722 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:52.722 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:52.722 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:52.722 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:52.722 Initialization complete. Launching workers. 
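The identify and perf invocations recorded above can be replayed by hand outside the CI harness. A minimal sketch, assuming the SPDK tree is built at the path this job uses and that the PCIe functions (for example 0000:00:13.0) are already bound to a userspace driver via scripts/setup.sh; the flag glosses in the comments are paraphrased, not authoritative:

  #!/usr/bin/env bash
  # Sketch: re-run the two test steps from this log by hand.
  SPDK=/home/vagrant/spdk_repo/spdk

  # Dump controller and namespace data for one PCIe function, as nvme_identify
  # does per BDF (-r selects transport and address, -i 0 the shared-memory group):
  "$SPDK/build/bin/spdk_nvme_identify" -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

  # Sequential reads at queue depth 128 with 12288-byte I/Os for 1 second, with
  # latency tracking (-LL) enabled so the summary table and histograms below
  # get produced:
  "$SPDK/build/bin/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 -N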
00:07:52.722 ======================================================== 00:07:52.722 Latency(us) 00:07:52.722 Device Information : IOPS MiB/s Average min max 00:07:52.722 PCIE (0000:00:10.0) NSID 1 from core 0: 14872.57 174.29 8628.26 5727.34 28346.38 00:07:52.722 PCIE (0000:00:11.0) NSID 1 from core 0: 14872.57 174.29 8618.30 5784.02 26677.69 00:07:52.722 PCIE (0000:00:13.0) NSID 1 from core 0: 14872.57 174.29 8607.82 5814.58 25561.22 00:07:52.722 PCIE (0000:00:12.0) NSID 1 from core 0: 14872.57 174.29 8596.78 5806.82 23863.11 00:07:52.722 PCIE (0000:00:12.0) NSID 2 from core 0: 14872.57 174.29 8586.45 5835.49 22344.22 00:07:52.722 PCIE (0000:00:12.0) NSID 3 from core 0: 14872.57 174.29 8576.24 5834.59 20722.89 00:07:52.722 ======================================================== 00:07:52.722 Total : 89235.44 1045.73 8602.31 5727.34 28346.38 00:07:52.722 00:07:52.722 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:52.722 ================================================================================= 00:07:52.722 1.00000% : 5948.652us 00:07:52.722 10.00000% : 6301.538us 00:07:52.722 25.00000% : 6704.837us 00:07:52.722 50.00000% : 7813.908us 00:07:52.722 75.00000% : 9830.400us 00:07:52.722 90.00000% : 12048.542us 00:07:52.722 95.00000% : 13913.797us 00:07:52.722 98.00000% : 15728.640us 00:07:52.722 99.00000% : 18551.729us 00:07:52.722 99.50000% : 21475.643us 00:07:52.722 99.90000% : 28029.243us 00:07:52.722 99.99000% : 28432.542us 00:07:52.722 99.99900% : 28432.542us 00:07:52.722 99.99990% : 28432.542us 00:07:52.722 99.99999% : 28432.542us 00:07:52.722 00:07:52.722 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:52.722 ================================================================================= 00:07:52.722 1.00000% : 5999.065us 00:07:52.722 10.00000% : 6326.745us 00:07:52.722 25.00000% : 6704.837us 00:07:52.722 50.00000% : 7914.732us 00:07:52.722 75.00000% : 9779.988us 00:07:52.722 90.00000% : 12149.366us 00:07:52.722 95.00000% : 14216.271us 00:07:52.722 98.00000% : 15123.692us 00:07:52.722 99.00000% : 18753.378us 00:07:52.722 99.50000% : 20265.748us 00:07:52.722 99.90000% : 26416.049us 00:07:52.722 99.99000% : 26819.348us 00:07:52.722 99.99900% : 26819.348us 00:07:52.722 99.99990% : 26819.348us 00:07:52.722 99.99999% : 26819.348us 00:07:52.722 00:07:52.722 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:52.722 ================================================================================= 00:07:52.722 1.00000% : 6024.271us 00:07:52.722 10.00000% : 6326.745us 00:07:52.722 25.00000% : 6704.837us 00:07:52.722 50.00000% : 7914.732us 00:07:52.722 75.00000% : 9830.400us 00:07:52.722 90.00000% : 12250.191us 00:07:52.722 95.00000% : 13812.972us 00:07:52.722 98.00000% : 15325.342us 00:07:52.722 99.00000% : 18350.080us 00:07:52.722 99.50000% : 19257.502us 00:07:52.722 99.90000% : 25206.154us 00:07:52.722 99.99000% : 25609.452us 00:07:52.722 99.99900% : 25609.452us 00:07:52.722 99.99990% : 25609.452us 00:07:52.722 99.99999% : 25609.452us 00:07:52.722 00:07:52.722 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:52.722 ================================================================================= 00:07:52.722 1.00000% : 6024.271us 00:07:52.722 10.00000% : 6326.745us 00:07:52.722 25.00000% : 6654.425us 00:07:52.722 50.00000% : 7965.145us 00:07:52.722 75.00000% : 9779.988us 00:07:52.722 90.00000% : 12199.778us 00:07:52.722 95.00000% : 13812.972us 00:07:52.722 98.00000% : 15426.166us 00:07:52.722 
99.00000% : 17241.009us 00:07:52.722 99.50000% : 19156.677us 00:07:52.722 99.90000% : 23492.135us 00:07:52.722 99.99000% : 23895.434us 00:07:52.722 99.99900% : 23895.434us 00:07:52.722 99.99990% : 23895.434us 00:07:52.722 99.99999% : 23895.434us 00:07:52.722 00:07:52.722 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:52.722 ================================================================================= 00:07:52.722 1.00000% : 6024.271us 00:07:52.722 10.00000% : 6326.745us 00:07:52.722 25.00000% : 6704.837us 00:07:52.722 50.00000% : 7914.732us 00:07:52.722 75.00000% : 9729.575us 00:07:52.722 90.00000% : 12149.366us 00:07:52.722 95.00000% : 13913.797us 00:07:52.722 98.00000% : 15325.342us 00:07:52.722 99.00000% : 17140.185us 00:07:52.722 99.50000% : 18551.729us 00:07:52.722 99.90000% : 21979.766us 00:07:52.722 99.99000% : 22383.065us 00:07:52.722 99.99900% : 22383.065us 00:07:52.722 99.99990% : 22383.065us 00:07:52.722 99.99999% : 22383.065us 00:07:52.722 00:07:52.722 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:52.722 ================================================================================= 00:07:52.722 1.00000% : 6024.271us 00:07:52.722 10.00000% : 6326.745us 00:07:52.722 25.00000% : 6704.837us 00:07:52.722 50.00000% : 7864.320us 00:07:52.722 75.00000% : 9779.988us 00:07:52.722 90.00000% : 12098.954us 00:07:52.722 95.00000% : 13812.972us 00:07:52.722 98.00000% : 15123.692us 00:07:52.722 99.00000% : 17341.834us 00:07:52.722 99.50000% : 18047.606us 00:07:52.722 99.90000% : 20366.572us 00:07:52.722 99.99000% : 20769.871us 00:07:52.722 99.99900% : 20769.871us 00:07:52.722 99.99990% : 20769.871us 00:07:52.722 99.99999% : 20769.871us 00:07:52.722 00:07:52.722 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:52.723 ============================================================================== 00:07:52.723 Range in us Cumulative IO count 00:07:52.723 5721.797 - 5747.003: 0.0067% ( 1) 00:07:52.723 5747.003 - 5772.209: 0.0335% ( 4) 00:07:52.723 5772.209 - 5797.415: 0.0671% ( 5) 00:07:52.723 5797.415 - 5822.622: 0.1408% ( 11) 00:07:52.723 5822.622 - 5847.828: 0.2548% ( 17) 00:07:52.723 5847.828 - 5873.034: 0.4024% ( 22) 00:07:52.723 5873.034 - 5898.240: 0.6773% ( 41) 00:07:52.723 5898.240 - 5923.446: 0.8785% ( 30) 00:07:52.723 5923.446 - 5948.652: 1.2004% ( 48) 00:07:52.723 5948.652 - 5973.858: 1.5223% ( 48) 00:07:52.723 5973.858 - 5999.065: 1.9313% ( 61) 00:07:52.723 5999.065 - 6024.271: 2.3806% ( 67) 00:07:52.723 6024.271 - 6049.477: 2.9573% ( 86) 00:07:52.723 6049.477 - 6074.683: 3.5207% ( 84) 00:07:52.723 6074.683 - 6099.889: 4.1510% ( 94) 00:07:52.723 6099.889 - 6125.095: 4.9624% ( 121) 00:07:52.723 6125.095 - 6150.302: 5.6196% ( 98) 00:07:52.723 6150.302 - 6175.508: 6.4512% ( 124) 00:07:52.723 6175.508 - 6200.714: 7.3498% ( 134) 00:07:52.723 6200.714 - 6225.920: 8.1947% ( 126) 00:07:52.723 6225.920 - 6251.126: 9.1269% ( 139) 00:07:52.723 6251.126 - 6276.332: 9.9249% ( 119) 00:07:52.723 6276.332 - 6301.538: 10.7430% ( 122) 00:07:52.723 6301.538 - 6326.745: 11.6282% ( 132) 00:07:52.723 6326.745 - 6351.951: 12.5671% ( 140) 00:07:52.723 6351.951 - 6377.157: 13.5059% ( 140) 00:07:52.723 6377.157 - 6402.363: 14.4179% ( 136) 00:07:52.723 6402.363 - 6427.569: 15.3702% ( 142) 00:07:52.723 6427.569 - 6452.775: 16.3023% ( 139) 00:07:52.723 6452.775 - 6503.188: 18.1599% ( 277) 00:07:52.723 6503.188 - 6553.600: 20.0443% ( 281) 00:07:52.723 6553.600 - 6604.012: 22.0292% ( 296) 00:07:52.723 6604.012 - 6654.425: 24.0008% ( 294) 
00:07:52.723 6654.425 - 6704.837: 25.9455% ( 290) 00:07:52.723 6704.837 - 6755.249: 27.9842% ( 304) 00:07:52.723 6755.249 - 6805.662: 30.0764% ( 312) 00:07:52.723 6805.662 - 6856.074: 32.1553% ( 310) 00:07:52.723 6856.074 - 6906.486: 34.0598% ( 284) 00:07:52.723 6906.486 - 6956.898: 35.8235% ( 263) 00:07:52.723 6956.898 - 7007.311: 37.3793% ( 232) 00:07:52.723 7007.311 - 7057.723: 38.8144% ( 214) 00:07:52.723 7057.723 - 7108.135: 40.0013% ( 177) 00:07:52.723 7108.135 - 7158.548: 40.9402% ( 140) 00:07:52.723 7158.548 - 7208.960: 41.7986% ( 128) 00:07:52.723 7208.960 - 7259.372: 42.4893% ( 103) 00:07:52.723 7259.372 - 7309.785: 43.1330% ( 96) 00:07:52.723 7309.785 - 7360.197: 43.6829% ( 82) 00:07:52.723 7360.197 - 7410.609: 44.3401% ( 98) 00:07:52.723 7410.609 - 7461.022: 44.9705% ( 94) 00:07:52.723 7461.022 - 7511.434: 45.6210% ( 97) 00:07:52.723 7511.434 - 7561.846: 46.3452% ( 108) 00:07:52.723 7561.846 - 7612.258: 47.0427% ( 104) 00:07:52.723 7612.258 - 7662.671: 47.7669% ( 108) 00:07:52.723 7662.671 - 7713.083: 48.5247% ( 113) 00:07:52.723 7713.083 - 7763.495: 49.3495% ( 123) 00:07:52.723 7763.495 - 7813.908: 50.1006% ( 112) 00:07:52.723 7813.908 - 7864.320: 50.9455% ( 126) 00:07:52.723 7864.320 - 7914.732: 51.7905% ( 126) 00:07:52.723 7914.732 - 7965.145: 52.6086% ( 122) 00:07:52.723 7965.145 - 8015.557: 53.4335% ( 123) 00:07:52.723 8015.557 - 8065.969: 54.1041% ( 100) 00:07:52.723 8065.969 - 8116.382: 54.9692% ( 129) 00:07:52.723 8116.382 - 8166.794: 55.6532% ( 102) 00:07:52.723 8166.794 - 8217.206: 56.4109% ( 113) 00:07:52.723 8217.206 - 8267.618: 57.1754% ( 114) 00:07:52.723 8267.618 - 8318.031: 57.8661% ( 103) 00:07:52.723 8318.031 - 8368.443: 58.6105% ( 111) 00:07:52.723 8368.443 - 8418.855: 59.3817% ( 115) 00:07:52.723 8418.855 - 8469.268: 60.0724% ( 103) 00:07:52.723 8469.268 - 8519.680: 60.7296% ( 98) 00:07:52.723 8519.680 - 8570.092: 61.4337% ( 105) 00:07:52.723 8570.092 - 8620.505: 62.2049% ( 115) 00:07:52.723 8620.505 - 8670.917: 62.8554% ( 97) 00:07:52.723 8670.917 - 8721.329: 63.5730% ( 107) 00:07:52.723 8721.329 - 8771.742: 64.2436% ( 100) 00:07:52.723 8771.742 - 8822.154: 64.8136% ( 85) 00:07:52.723 8822.154 - 8872.566: 65.4305% ( 92) 00:07:52.723 8872.566 - 8922.978: 66.0072% ( 86) 00:07:52.723 8922.978 - 8973.391: 66.5638% ( 83) 00:07:52.723 8973.391 - 9023.803: 67.0198% ( 68) 00:07:52.723 9023.803 - 9074.215: 67.4826% ( 69) 00:07:52.723 9074.215 - 9124.628: 67.9654% ( 72) 00:07:52.723 9124.628 - 9175.040: 68.5019% ( 80) 00:07:52.723 9175.040 - 9225.452: 69.0518% ( 82) 00:07:52.723 9225.452 - 9275.865: 69.5547% ( 75) 00:07:52.723 9275.865 - 9326.277: 70.1314% ( 86) 00:07:52.723 9326.277 - 9376.689: 70.7082% ( 86) 00:07:52.723 9376.689 - 9427.102: 71.1373% ( 64) 00:07:52.723 9427.102 - 9477.514: 71.6939% ( 83) 00:07:52.723 9477.514 - 9527.926: 72.1768% ( 72) 00:07:52.723 9527.926 - 9578.338: 72.6998% ( 78) 00:07:52.723 9578.338 - 9628.751: 73.2162% ( 77) 00:07:52.723 9628.751 - 9679.163: 73.7057% ( 73) 00:07:52.723 9679.163 - 9729.575: 74.2221% ( 77) 00:07:52.723 9729.575 - 9779.988: 74.6245% ( 60) 00:07:52.723 9779.988 - 9830.400: 75.1744% ( 82) 00:07:52.723 9830.400 - 9880.812: 75.6639% ( 73) 00:07:52.723 9880.812 - 9931.225: 76.0998% ( 65) 00:07:52.723 9931.225 - 9981.637: 76.5424% ( 66) 00:07:52.723 9981.637 - 10032.049: 77.1057% ( 84) 00:07:52.723 10032.049 - 10082.462: 77.6153% ( 76) 00:07:52.723 10082.462 - 10132.874: 78.0915% ( 71) 00:07:52.723 10132.874 - 10183.286: 78.6011% ( 76) 00:07:52.723 10183.286 - 10233.698: 79.0504% ( 67) 00:07:52.723 10233.698 - 
10284.111: 79.5467% ( 74) 00:07:52.723 10284.111 - 10334.523: 80.0027% ( 68) 00:07:52.723 10334.523 - 10384.935: 80.4721% ( 70) 00:07:52.723 10384.935 - 10435.348: 80.9080% ( 65) 00:07:52.723 10435.348 - 10485.760: 81.2902% ( 57) 00:07:52.723 10485.760 - 10536.172: 81.7664% ( 71) 00:07:52.723 10536.172 - 10586.585: 82.2023% ( 65) 00:07:52.723 10586.585 - 10636.997: 82.5711% ( 55) 00:07:52.723 10636.997 - 10687.409: 82.9734% ( 60) 00:07:52.723 10687.409 - 10737.822: 83.3557% ( 57) 00:07:52.723 10737.822 - 10788.234: 83.7044% ( 52) 00:07:52.723 10788.234 - 10838.646: 84.0397% ( 50) 00:07:52.723 10838.646 - 10889.058: 84.3146% ( 41) 00:07:52.723 10889.058 - 10939.471: 84.6298% ( 47) 00:07:52.723 10939.471 - 10989.883: 84.8645% ( 35) 00:07:52.723 10989.883 - 11040.295: 85.1395% ( 41) 00:07:52.723 11040.295 - 11090.708: 85.3876% ( 37) 00:07:52.723 11090.708 - 11141.120: 85.6290% ( 36) 00:07:52.723 11141.120 - 11191.532: 85.9174% ( 43) 00:07:52.723 11191.532 - 11241.945: 86.1454% ( 34) 00:07:52.723 11241.945 - 11292.357: 86.4203% ( 41) 00:07:52.723 11292.357 - 11342.769: 86.6819% ( 39) 00:07:52.723 11342.769 - 11393.182: 86.9568% ( 41) 00:07:52.723 11393.182 - 11443.594: 87.2653% ( 46) 00:07:52.723 11443.594 - 11494.006: 87.5939% ( 49) 00:07:52.723 11494.006 - 11544.418: 87.8286% ( 35) 00:07:52.723 11544.418 - 11594.831: 88.1102% ( 42) 00:07:52.723 11594.831 - 11645.243: 88.3517% ( 36) 00:07:52.724 11645.243 - 11695.655: 88.5864% ( 35) 00:07:52.724 11695.655 - 11746.068: 88.8211% ( 35) 00:07:52.724 11746.068 - 11796.480: 89.0558% ( 35) 00:07:52.724 11796.480 - 11846.892: 89.2771% ( 33) 00:07:52.724 11846.892 - 11897.305: 89.4783% ( 30) 00:07:52.724 11897.305 - 11947.717: 89.6593% ( 27) 00:07:52.724 11947.717 - 11998.129: 89.8538% ( 29) 00:07:52.724 11998.129 - 12048.542: 90.0282% ( 26) 00:07:52.724 12048.542 - 12098.954: 90.2830% ( 38) 00:07:52.724 12098.954 - 12149.366: 90.4305% ( 22) 00:07:52.724 12149.366 - 12199.778: 90.6786% ( 37) 00:07:52.724 12199.778 - 12250.191: 90.8731% ( 29) 00:07:52.724 12250.191 - 12300.603: 91.0676% ( 29) 00:07:52.724 12300.603 - 12351.015: 91.2621% ( 29) 00:07:52.724 12351.015 - 12401.428: 91.4565% ( 29) 00:07:52.724 12401.428 - 12451.840: 91.6711% ( 32) 00:07:52.724 12451.840 - 12502.252: 91.8522% ( 27) 00:07:52.724 12502.252 - 12552.665: 91.9796% ( 19) 00:07:52.724 12552.665 - 12603.077: 92.1406% ( 24) 00:07:52.724 12603.077 - 12653.489: 92.3015% ( 24) 00:07:52.724 12653.489 - 12703.902: 92.4289% ( 19) 00:07:52.724 12703.902 - 12754.314: 92.5496% ( 18) 00:07:52.724 12754.314 - 12804.726: 92.7106% ( 24) 00:07:52.724 12804.726 - 12855.138: 92.8313% ( 18) 00:07:52.724 12855.138 - 12905.551: 92.9721% ( 21) 00:07:52.724 12905.551 - 13006.375: 93.1800% ( 31) 00:07:52.724 13006.375 - 13107.200: 93.4214% ( 36) 00:07:52.724 13107.200 - 13208.025: 93.6427% ( 33) 00:07:52.724 13208.025 - 13308.849: 93.8908% ( 37) 00:07:52.724 13308.849 - 13409.674: 94.1188% ( 34) 00:07:52.724 13409.674 - 13510.498: 94.3468% ( 34) 00:07:52.724 13510.498 - 13611.323: 94.5815% ( 35) 00:07:52.724 13611.323 - 13712.148: 94.8163% ( 35) 00:07:52.724 13712.148 - 13812.972: 94.9973% ( 27) 00:07:52.724 13812.972 - 13913.797: 95.1985% ( 30) 00:07:52.724 13913.797 - 14014.622: 95.3997% ( 30) 00:07:52.724 14014.622 - 14115.446: 95.5673% ( 25) 00:07:52.724 14115.446 - 14216.271: 95.7014% ( 20) 00:07:52.724 14216.271 - 14317.095: 95.9093% ( 31) 00:07:52.724 14317.095 - 14417.920: 96.0770% ( 25) 00:07:52.724 14417.920 - 14518.745: 96.2178% ( 21) 00:07:52.724 14518.745 - 14619.569: 96.3720% ( 23) 
00:07:52.724 14619.569 - 14720.394: 96.5665% ( 29) 00:07:52.724 14720.394 - 14821.218: 96.7677% ( 30) 00:07:52.724 14821.218 - 14922.043: 96.9152% ( 22) 00:07:52.724 14922.043 - 15022.868: 97.1164% ( 30) 00:07:52.724 15022.868 - 15123.692: 97.2908% ( 26) 00:07:52.724 15123.692 - 15224.517: 97.4718% ( 27) 00:07:52.724 15224.517 - 15325.342: 97.6529% ( 27) 00:07:52.724 15325.342 - 15426.166: 97.7602% ( 16) 00:07:52.724 15426.166 - 15526.991: 97.8675% ( 16) 00:07:52.724 15526.991 - 15627.815: 97.9547% ( 13) 00:07:52.724 15627.815 - 15728.640: 98.0486% ( 14) 00:07:52.724 15728.640 - 15829.465: 98.1558% ( 16) 00:07:52.724 15829.465 - 15930.289: 98.2296% ( 11) 00:07:52.724 15930.289 - 16031.114: 98.2967% ( 10) 00:07:52.724 16031.114 - 16131.938: 98.3704% ( 11) 00:07:52.724 16131.938 - 16232.763: 98.4107% ( 6) 00:07:52.724 16232.763 - 16333.588: 98.4777% ( 10) 00:07:52.724 16333.588 - 16434.412: 98.5448% ( 10) 00:07:52.724 16434.412 - 16535.237: 98.5917% ( 7) 00:07:52.724 16535.237 - 16636.062: 98.6186% ( 4) 00:07:52.724 16636.062 - 16736.886: 98.6454% ( 4) 00:07:52.724 16736.886 - 16837.711: 98.6722% ( 4) 00:07:52.724 16837.711 - 16938.535: 98.7057% ( 5) 00:07:52.724 16938.535 - 17039.360: 98.7124% ( 1) 00:07:52.724 17543.483 - 17644.308: 98.7326% ( 3) 00:07:52.724 17644.308 - 17745.132: 98.7661% ( 5) 00:07:52.724 17745.132 - 17845.957: 98.7929% ( 4) 00:07:52.724 17845.957 - 17946.782: 98.8197% ( 4) 00:07:52.724 17946.782 - 18047.606: 98.8533% ( 5) 00:07:52.724 18047.606 - 18148.431: 98.8801% ( 4) 00:07:52.724 18148.431 - 18249.255: 98.9136% ( 5) 00:07:52.724 18249.255 - 18350.080: 98.9405% ( 4) 00:07:52.724 18350.080 - 18450.905: 98.9606% ( 3) 00:07:52.724 18450.905 - 18551.729: 99.0008% ( 6) 00:07:52.724 18551.729 - 18652.554: 99.0343% ( 5) 00:07:52.724 18652.554 - 18753.378: 99.0612% ( 4) 00:07:52.724 18753.378 - 18854.203: 99.1014% ( 6) 00:07:52.724 18854.203 - 18955.028: 99.1282% ( 4) 00:07:52.724 18955.028 - 19055.852: 99.1416% ( 2) 00:07:52.724 19862.449 - 19963.274: 99.1550% ( 2) 00:07:52.724 19963.274 - 20064.098: 99.1819% ( 4) 00:07:52.724 20064.098 - 20164.923: 99.2020% ( 3) 00:07:52.724 20164.923 - 20265.748: 99.2221% ( 3) 00:07:52.724 20265.748 - 20366.572: 99.2422% ( 3) 00:07:52.724 20366.572 - 20467.397: 99.2690% ( 4) 00:07:52.724 20467.397 - 20568.222: 99.2959% ( 4) 00:07:52.724 20568.222 - 20669.046: 99.3227% ( 4) 00:07:52.724 20669.046 - 20769.871: 99.3428% ( 3) 00:07:52.724 20769.871 - 20870.695: 99.3696% ( 4) 00:07:52.724 20870.695 - 20971.520: 99.3898% ( 3) 00:07:52.724 20971.520 - 21072.345: 99.4166% ( 4) 00:07:52.724 21072.345 - 21173.169: 99.4434% ( 4) 00:07:52.724 21173.169 - 21273.994: 99.4702% ( 4) 00:07:52.724 21273.994 - 21374.818: 99.4970% ( 4) 00:07:52.724 21374.818 - 21475.643: 99.5239% ( 4) 00:07:52.724 21475.643 - 21576.468: 99.5507% ( 4) 00:07:52.724 21576.468 - 21677.292: 99.5708% ( 3) 00:07:52.724 26214.400 - 26416.049: 99.5775% ( 1) 00:07:52.724 26416.049 - 26617.698: 99.6111% ( 5) 00:07:52.724 26617.698 - 26819.348: 99.6647% ( 8) 00:07:52.724 26819.348 - 27020.997: 99.7049% ( 6) 00:07:52.724 27020.997 - 27222.646: 99.7519% ( 7) 00:07:52.724 27222.646 - 27424.295: 99.7921% ( 6) 00:07:52.724 27424.295 - 27625.945: 99.8391% ( 7) 00:07:52.724 27625.945 - 27827.594: 99.8860% ( 7) 00:07:52.724 27827.594 - 28029.243: 99.9262% ( 6) 00:07:52.724 28029.243 - 28230.892: 99.9799% ( 8) 00:07:52.724 28230.892 - 28432.542: 100.0000% ( 3) 00:07:52.724 00:07:52.724 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:52.724 
============================================================================== 00:07:52.724 Range in us Cumulative IO count 00:07:52.724 5772.209 - 5797.415: 0.0134% ( 2) 00:07:52.724 5797.415 - 5822.622: 0.0268% ( 2) 00:07:52.724 5822.622 - 5847.828: 0.0671% ( 6) 00:07:52.724 5847.828 - 5873.034: 0.1274% ( 9) 00:07:52.724 5873.034 - 5898.240: 0.1945% ( 10) 00:07:52.724 5898.240 - 5923.446: 0.3487% ( 23) 00:07:52.724 5923.446 - 5948.652: 0.5298% ( 27) 00:07:52.724 5948.652 - 5973.858: 0.7444% ( 32) 00:07:52.724 5973.858 - 5999.065: 1.0461% ( 45) 00:07:52.724 5999.065 - 6024.271: 1.4016% ( 53) 00:07:52.724 6024.271 - 6049.477: 1.7637% ( 54) 00:07:52.724 6049.477 - 6074.683: 2.1862% ( 63) 00:07:52.724 6074.683 - 6099.889: 2.6824% ( 74) 00:07:52.724 6099.889 - 6125.095: 3.3061% ( 93) 00:07:52.724 6125.095 - 6150.302: 4.0236% ( 107) 00:07:52.724 6150.302 - 6175.508: 4.7881% ( 114) 00:07:52.724 6175.508 - 6200.714: 5.5794% ( 118) 00:07:52.724 6200.714 - 6225.920: 6.4713% ( 133) 00:07:52.724 6225.920 - 6251.126: 7.3833% ( 136) 00:07:52.724 6251.126 - 6276.332: 8.3356% ( 142) 00:07:52.724 6276.332 - 6301.538: 9.3079% ( 145) 00:07:52.724 6301.538 - 6326.745: 10.3205% ( 151) 00:07:52.724 6326.745 - 6351.951: 11.4472% ( 168) 00:07:52.724 6351.951 - 6377.157: 12.5000% ( 157) 00:07:52.724 6377.157 - 6402.363: 13.5931% ( 163) 00:07:52.724 6402.363 - 6427.569: 14.6795% ( 162) 00:07:52.724 6427.569 - 6452.775: 15.7189% ( 155) 00:07:52.724 6452.775 - 6503.188: 17.9050% ( 326) 00:07:52.724 6503.188 - 6553.600: 20.1448% ( 334) 00:07:52.724 6553.600 - 6604.012: 22.4182% ( 339) 00:07:52.724 6604.012 - 6654.425: 24.7720% ( 351) 00:07:52.724 6654.425 - 6704.837: 27.0252% ( 336) 00:07:52.724 6704.837 - 6755.249: 29.3522% ( 347) 00:07:52.724 6755.249 - 6805.662: 31.5451% ( 327) 00:07:52.724 6805.662 - 6856.074: 33.5032% ( 292) 00:07:52.724 6856.074 - 6906.486: 35.3407% ( 274) 00:07:52.724 6906.486 - 6956.898: 36.9903% ( 246) 00:07:52.724 6956.898 - 7007.311: 38.3986% ( 210) 00:07:52.724 7007.311 - 7057.723: 39.5185% ( 167) 00:07:52.724 7057.723 - 7108.135: 40.4708% ( 142) 00:07:52.724 7108.135 - 7158.548: 41.2285% ( 113) 00:07:52.724 7158.548 - 7208.960: 41.8924% ( 99) 00:07:52.724 7208.960 - 7259.372: 42.4289% ( 80) 00:07:52.724 7259.372 - 7309.785: 42.9788% ( 82) 00:07:52.725 7309.785 - 7360.197: 43.4885% ( 76) 00:07:52.725 7360.197 - 7410.609: 43.9847% ( 74) 00:07:52.725 7410.609 - 7461.022: 44.4273% ( 66) 00:07:52.725 7461.022 - 7511.434: 44.8967% ( 70) 00:07:52.725 7511.434 - 7561.846: 45.4600% ( 84) 00:07:52.725 7561.846 - 7612.258: 46.0367% ( 86) 00:07:52.725 7612.258 - 7662.671: 46.6001% ( 84) 00:07:52.725 7662.671 - 7713.083: 47.2707% ( 100) 00:07:52.725 7713.083 - 7763.495: 48.1290% ( 128) 00:07:52.725 7763.495 - 7813.908: 49.0209% ( 133) 00:07:52.725 7813.908 - 7864.320: 49.8055% ( 117) 00:07:52.725 7864.320 - 7914.732: 50.6170% ( 121) 00:07:52.725 7914.732 - 7965.145: 51.5491% ( 139) 00:07:52.725 7965.145 - 8015.557: 52.5215% ( 145) 00:07:52.725 8015.557 - 8065.969: 53.4335% ( 136) 00:07:52.725 8065.969 - 8116.382: 54.3388% ( 135) 00:07:52.725 8116.382 - 8166.794: 55.2307% ( 133) 00:07:52.725 8166.794 - 8217.206: 56.1025% ( 130) 00:07:52.725 8217.206 - 8267.618: 56.9944% ( 133) 00:07:52.725 8267.618 - 8318.031: 57.8125% ( 122) 00:07:52.725 8318.031 - 8368.443: 58.6239% ( 121) 00:07:52.725 8368.443 - 8418.855: 59.4488% ( 123) 00:07:52.725 8418.855 - 8469.268: 60.2468% ( 119) 00:07:52.725 8469.268 - 8519.680: 61.1052% ( 128) 00:07:52.725 8519.680 - 8570.092: 61.8696% ( 114) 00:07:52.725 8570.092 - 
8620.505: 62.6677% ( 119) 00:07:52.725 8620.505 - 8670.917: 63.3584% ( 103) 00:07:52.725 8670.917 - 8721.329: 63.9619% ( 90) 00:07:52.725 8721.329 - 8771.742: 64.5520% ( 88) 00:07:52.725 8771.742 - 8822.154: 65.1690% ( 92) 00:07:52.725 8822.154 - 8872.566: 65.7658% ( 89) 00:07:52.725 8872.566 - 8922.978: 66.2554% ( 73) 00:07:52.725 8922.978 - 8973.391: 66.7449% ( 73) 00:07:52.725 8973.391 - 9023.803: 67.2948% ( 82) 00:07:52.725 9023.803 - 9074.215: 67.8447% ( 82) 00:07:52.725 9074.215 - 9124.628: 68.4080% ( 84) 00:07:52.725 9124.628 - 9175.040: 68.9109% ( 75) 00:07:52.725 9175.040 - 9225.452: 69.4206% ( 76) 00:07:52.725 9225.452 - 9275.865: 69.8900% ( 70) 00:07:52.725 9275.865 - 9326.277: 70.4064% ( 77) 00:07:52.725 9326.277 - 9376.689: 70.9831% ( 86) 00:07:52.725 9376.689 - 9427.102: 71.5397% ( 83) 00:07:52.725 9427.102 - 9477.514: 72.1030% ( 84) 00:07:52.725 9477.514 - 9527.926: 72.6328% ( 79) 00:07:52.725 9527.926 - 9578.338: 73.1424% ( 76) 00:07:52.725 9578.338 - 9628.751: 73.6856% ( 81) 00:07:52.725 9628.751 - 9679.163: 74.1953% ( 76) 00:07:52.725 9679.163 - 9729.575: 74.7385% ( 81) 00:07:52.725 9729.575 - 9779.988: 75.2682% ( 79) 00:07:52.725 9779.988 - 9830.400: 75.7779% ( 76) 00:07:52.725 9830.400 - 9880.812: 76.3680% ( 88) 00:07:52.725 9880.812 - 9931.225: 76.9179% ( 82) 00:07:52.725 9931.225 - 9981.637: 77.4276% ( 76) 00:07:52.725 9981.637 - 10032.049: 77.9909% ( 84) 00:07:52.725 10032.049 - 10082.462: 78.4670% ( 71) 00:07:52.725 10082.462 - 10132.874: 78.9498% ( 72) 00:07:52.725 10132.874 - 10183.286: 79.3321% ( 57) 00:07:52.725 10183.286 - 10233.698: 79.7814% ( 67) 00:07:52.725 10233.698 - 10284.111: 80.1703% ( 58) 00:07:52.725 10284.111 - 10334.523: 80.5727% ( 60) 00:07:52.725 10334.523 - 10384.935: 80.9147% ( 51) 00:07:52.725 10384.935 - 10435.348: 81.2500% ( 50) 00:07:52.725 10435.348 - 10485.760: 81.6054% ( 53) 00:07:52.725 10485.760 - 10536.172: 81.9877% ( 57) 00:07:52.725 10536.172 - 10586.585: 82.3833% ( 59) 00:07:52.725 10586.585 - 10636.997: 82.8125% ( 64) 00:07:52.725 10636.997 - 10687.409: 83.2082% ( 59) 00:07:52.725 10687.409 - 10737.822: 83.5636% ( 53) 00:07:52.725 10737.822 - 10788.234: 83.9123% ( 52) 00:07:52.725 10788.234 - 10838.646: 84.2275% ( 47) 00:07:52.725 10838.646 - 10889.058: 84.5024% ( 41) 00:07:52.725 10889.058 - 10939.471: 84.7639% ( 39) 00:07:52.725 10939.471 - 10989.883: 85.0188% ( 38) 00:07:52.725 10989.883 - 11040.295: 85.2803% ( 39) 00:07:52.725 11040.295 - 11090.708: 85.5217% ( 36) 00:07:52.725 11090.708 - 11141.120: 85.7564% ( 35) 00:07:52.725 11141.120 - 11191.532: 86.0314% ( 41) 00:07:52.725 11191.532 - 11241.945: 86.2728% ( 36) 00:07:52.725 11241.945 - 11292.357: 86.5142% ( 36) 00:07:52.725 11292.357 - 11342.769: 86.7288% ( 32) 00:07:52.725 11342.769 - 11393.182: 86.9300% ( 30) 00:07:52.725 11393.182 - 11443.594: 87.1647% ( 35) 00:07:52.725 11443.594 - 11494.006: 87.3726% ( 31) 00:07:52.725 11494.006 - 11544.418: 87.5872% ( 32) 00:07:52.725 11544.418 - 11594.831: 87.7884% ( 30) 00:07:52.725 11594.831 - 11645.243: 87.9761% ( 28) 00:07:52.725 11645.243 - 11695.655: 88.2041% ( 34) 00:07:52.725 11695.655 - 11746.068: 88.4254% ( 33) 00:07:52.725 11746.068 - 11796.480: 88.6333% ( 31) 00:07:52.725 11796.480 - 11846.892: 88.8613% ( 34) 00:07:52.725 11846.892 - 11897.305: 89.0558% ( 29) 00:07:52.725 11897.305 - 11947.717: 89.2771% ( 33) 00:07:52.725 11947.717 - 11998.129: 89.4917% ( 32) 00:07:52.725 11998.129 - 12048.542: 89.7197% ( 34) 00:07:52.725 12048.542 - 12098.954: 89.9477% ( 34) 00:07:52.725 12098.954 - 12149.366: 90.1690% ( 33) 00:07:52.725 
12149.366 - 12199.778: 90.3903% ( 33) 00:07:52.725 12199.778 - 12250.191: 90.6183% ( 34) 00:07:52.725 12250.191 - 12300.603: 90.8262% ( 31) 00:07:52.725 12300.603 - 12351.015: 91.0408% ( 32) 00:07:52.725 12351.015 - 12401.428: 91.2352% ( 29) 00:07:52.725 12401.428 - 12451.840: 91.4163% ( 27) 00:07:52.725 12451.840 - 12502.252: 91.6041% ( 28) 00:07:52.725 12502.252 - 12552.665: 91.7717% ( 25) 00:07:52.725 12552.665 - 12603.077: 91.9461% ( 26) 00:07:52.725 12603.077 - 12653.489: 92.0936% ( 22) 00:07:52.725 12653.489 - 12703.902: 92.2479% ( 23) 00:07:52.725 12703.902 - 12754.314: 92.3820% ( 20) 00:07:52.725 12754.314 - 12804.726: 92.5228% ( 21) 00:07:52.725 12804.726 - 12855.138: 92.6435% ( 18) 00:07:52.725 12855.138 - 12905.551: 92.7575% ( 17) 00:07:52.725 12905.551 - 13006.375: 92.9386% ( 27) 00:07:52.725 13006.375 - 13107.200: 93.1062% ( 25) 00:07:52.725 13107.200 - 13208.025: 93.2403% ( 20) 00:07:52.725 13208.025 - 13308.849: 93.4281% ( 28) 00:07:52.725 13308.849 - 13409.674: 93.6427% ( 32) 00:07:52.725 13409.674 - 13510.498: 93.8305% ( 28) 00:07:52.725 13510.498 - 13611.323: 93.9780% ( 22) 00:07:52.725 13611.323 - 13712.148: 94.1658% ( 28) 00:07:52.725 13712.148 - 13812.972: 94.3267% ( 24) 00:07:52.725 13812.972 - 13913.797: 94.4608% ( 20) 00:07:52.725 13913.797 - 14014.622: 94.6754% ( 32) 00:07:52.725 14014.622 - 14115.446: 94.9437% ( 40) 00:07:52.725 14115.446 - 14216.271: 95.1784% ( 35) 00:07:52.725 14216.271 - 14317.095: 95.4802% ( 45) 00:07:52.725 14317.095 - 14417.920: 95.8087% ( 49) 00:07:52.725 14417.920 - 14518.745: 96.1373% ( 49) 00:07:52.725 14518.745 - 14619.569: 96.4995% ( 54) 00:07:52.725 14619.569 - 14720.394: 96.8415% ( 51) 00:07:52.725 14720.394 - 14821.218: 97.1499% ( 46) 00:07:52.725 14821.218 - 14922.043: 97.4651% ( 47) 00:07:52.725 14922.043 - 15022.868: 97.7468% ( 42) 00:07:52.725 15022.868 - 15123.692: 98.0284% ( 42) 00:07:52.725 15123.692 - 15224.517: 98.2162% ( 28) 00:07:52.725 15224.517 - 15325.342: 98.3101% ( 14) 00:07:52.725 15325.342 - 15426.166: 98.3906% ( 12) 00:07:52.725 15426.166 - 15526.991: 98.4643% ( 11) 00:07:52.725 15526.991 - 15627.815: 98.5247% ( 9) 00:07:52.725 15627.815 - 15728.640: 98.5649% ( 6) 00:07:52.725 15728.640 - 15829.465: 98.5984% ( 5) 00:07:52.725 15829.465 - 15930.289: 98.6387% ( 6) 00:07:52.725 15930.289 - 16031.114: 98.6789% ( 6) 00:07:52.725 16031.114 - 16131.938: 98.7124% ( 5) 00:07:52.725 17845.957 - 17946.782: 98.7259% ( 2) 00:07:52.725 17946.782 - 18047.606: 98.7594% ( 5) 00:07:52.725 18047.606 - 18148.431: 98.7929% ( 5) 00:07:52.726 18148.431 - 18249.255: 98.8332% ( 6) 00:07:52.726 18249.255 - 18350.080: 98.8667% ( 5) 00:07:52.726 18350.080 - 18450.905: 98.9136% ( 7) 00:07:52.726 18450.905 - 18551.729: 98.9539% ( 6) 00:07:52.726 18551.729 - 18652.554: 98.9874% ( 5) 00:07:52.726 18652.554 - 18753.378: 99.0276% ( 6) 00:07:52.726 18753.378 - 18854.203: 99.0612% ( 5) 00:07:52.726 18854.203 - 18955.028: 99.1014% ( 6) 00:07:52.726 18955.028 - 19055.852: 99.1550% ( 8) 00:07:52.726 19055.852 - 19156.677: 99.1953% ( 6) 00:07:52.726 19156.677 - 19257.502: 99.2221% ( 4) 00:07:52.726 19257.502 - 19358.326: 99.2489% ( 4) 00:07:52.726 19358.326 - 19459.151: 99.2758% ( 4) 00:07:52.726 19459.151 - 19559.975: 99.3093% ( 5) 00:07:52.726 19559.975 - 19660.800: 99.3361% ( 4) 00:07:52.726 19660.800 - 19761.625: 99.3629% ( 4) 00:07:52.726 19761.625 - 19862.449: 99.3898% ( 4) 00:07:52.726 19862.449 - 19963.274: 99.4166% ( 4) 00:07:52.726 19963.274 - 20064.098: 99.4501% ( 5) 00:07:52.726 20064.098 - 20164.923: 99.4769% ( 4) 00:07:52.726 20164.923 - 
20265.748: 99.5038% ( 4) 00:07:52.726 20265.748 - 20366.572: 99.5306% ( 4) 00:07:52.726 20366.572 - 20467.397: 99.5574% ( 4) 00:07:52.726 20467.397 - 20568.222: 99.5708% ( 2) 00:07:52.726 24802.855 - 24903.680: 99.5775% ( 1) 00:07:52.726 24903.680 - 25004.505: 99.6043% ( 4) 00:07:52.726 25004.505 - 25105.329: 99.6312% ( 4) 00:07:52.726 25105.329 - 25206.154: 99.6513% ( 3) 00:07:52.726 25206.154 - 25306.978: 99.6781% ( 4) 00:07:52.726 25306.978 - 25407.803: 99.6982% ( 3) 00:07:52.726 25407.803 - 25508.628: 99.7183% ( 3) 00:07:52.726 25508.628 - 25609.452: 99.7452% ( 4) 00:07:52.726 25609.452 - 25710.277: 99.7720% ( 4) 00:07:52.726 25710.277 - 25811.102: 99.7921% ( 3) 00:07:52.726 25811.102 - 26012.751: 99.8458% ( 8) 00:07:52.726 26012.751 - 26214.400: 99.8927% ( 7) 00:07:52.726 26214.400 - 26416.049: 99.9396% ( 7) 00:07:52.726 26416.049 - 26617.698: 99.9866% ( 7) 00:07:52.726 26617.698 - 26819.348: 100.0000% ( 2) 00:07:52.726 00:07:52.726 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:52.726 ============================================================================== 00:07:52.726 Range in us Cumulative IO count 00:07:52.726 5797.415 - 5822.622: 0.0067% ( 1) 00:07:52.726 5822.622 - 5847.828: 0.0201% ( 2) 00:07:52.726 5847.828 - 5873.034: 0.0335% ( 2) 00:07:52.726 5873.034 - 5898.240: 0.0805% ( 7) 00:07:52.726 5898.240 - 5923.446: 0.1945% ( 17) 00:07:52.726 5923.446 - 5948.652: 0.3487% ( 23) 00:07:52.726 5948.652 - 5973.858: 0.5700% ( 33) 00:07:52.726 5973.858 - 5999.065: 0.8450% ( 41) 00:07:52.726 5999.065 - 6024.271: 1.1333% ( 43) 00:07:52.726 6024.271 - 6049.477: 1.4619% ( 49) 00:07:52.726 6049.477 - 6074.683: 1.8978% ( 65) 00:07:52.726 6074.683 - 6099.889: 2.5483% ( 97) 00:07:52.726 6099.889 - 6125.095: 3.2524% ( 105) 00:07:52.726 6125.095 - 6150.302: 3.9364% ( 102) 00:07:52.726 6150.302 - 6175.508: 4.6741% ( 110) 00:07:52.726 6175.508 - 6200.714: 5.4855% ( 121) 00:07:52.726 6200.714 - 6225.920: 6.3774% ( 133) 00:07:52.726 6225.920 - 6251.126: 7.3095% ( 139) 00:07:52.726 6251.126 - 6276.332: 8.2953% ( 147) 00:07:52.726 6276.332 - 6301.538: 9.3146% ( 152) 00:07:52.726 6301.538 - 6326.745: 10.3809% ( 159) 00:07:52.726 6326.745 - 6351.951: 11.5142% ( 169) 00:07:52.726 6351.951 - 6377.157: 12.6006% ( 162) 00:07:52.726 6377.157 - 6402.363: 13.7071% ( 165) 00:07:52.726 6402.363 - 6427.569: 14.7599% ( 157) 00:07:52.726 6427.569 - 6452.775: 15.8396% ( 161) 00:07:52.726 6452.775 - 6503.188: 17.9721% ( 318) 00:07:52.726 6503.188 - 6553.600: 20.2253% ( 336) 00:07:52.726 6553.600 - 6604.012: 22.5925% ( 353) 00:07:52.726 6604.012 - 6654.425: 24.9195% ( 347) 00:07:52.726 6654.425 - 6704.837: 27.3002% ( 355) 00:07:52.726 6704.837 - 6755.249: 29.6741% ( 354) 00:07:52.726 6755.249 - 6805.662: 31.8938% ( 331) 00:07:52.726 6805.662 - 6856.074: 33.9525% ( 307) 00:07:52.726 6856.074 - 6906.486: 35.8839% ( 288) 00:07:52.726 6906.486 - 6956.898: 37.5134% ( 243) 00:07:52.726 6956.898 - 7007.311: 38.8747% ( 203) 00:07:52.726 7007.311 - 7057.723: 39.9745% ( 164) 00:07:52.726 7057.723 - 7108.135: 40.9201% ( 141) 00:07:52.726 7108.135 - 7158.548: 41.6577% ( 110) 00:07:52.726 7158.548 - 7208.960: 42.2747% ( 92) 00:07:52.726 7208.960 - 7259.372: 42.7776% ( 75) 00:07:52.726 7259.372 - 7309.785: 43.2940% ( 77) 00:07:52.726 7309.785 - 7360.197: 43.8104% ( 77) 00:07:52.726 7360.197 - 7410.609: 44.2395% ( 64) 00:07:52.726 7410.609 - 7461.022: 44.7224% ( 72) 00:07:52.726 7461.022 - 7511.434: 45.1516% ( 64) 00:07:52.726 7511.434 - 7561.846: 45.6478% ( 74) 00:07:52.726 7561.846 - 7612.258: 46.1910% ( 
81) 00:07:52.726 7612.258 - 7662.671: 46.7409% ( 82) 00:07:52.726 7662.671 - 7713.083: 47.5054% ( 114) 00:07:52.726 7713.083 - 7763.495: 48.1693% ( 99) 00:07:52.726 7763.495 - 7813.908: 48.8600% ( 103) 00:07:52.726 7813.908 - 7864.320: 49.5574% ( 104) 00:07:52.726 7864.320 - 7914.732: 50.3018% ( 111) 00:07:52.726 7914.732 - 7965.145: 51.0998% ( 119) 00:07:52.726 7965.145 - 8015.557: 51.9179% ( 122) 00:07:52.726 8015.557 - 8065.969: 52.7495% ( 124) 00:07:52.726 8065.969 - 8116.382: 53.5810% ( 124) 00:07:52.726 8116.382 - 8166.794: 54.4595% ( 131) 00:07:52.726 8166.794 - 8217.206: 55.3380% ( 131) 00:07:52.726 8217.206 - 8267.618: 56.2634% ( 138) 00:07:52.726 8267.618 - 8318.031: 57.1888% ( 138) 00:07:52.726 8318.031 - 8368.443: 58.1478% ( 143) 00:07:52.726 8368.443 - 8418.855: 59.0799% ( 139) 00:07:52.726 8418.855 - 8469.268: 60.0054% ( 138) 00:07:52.726 8469.268 - 8519.680: 60.9107% ( 135) 00:07:52.726 8519.680 - 8570.092: 61.7959% ( 132) 00:07:52.726 8570.092 - 8620.505: 62.6677% ( 130) 00:07:52.726 8620.505 - 8670.917: 63.4791% ( 121) 00:07:52.726 8670.917 - 8721.329: 64.2503% ( 115) 00:07:52.726 8721.329 - 8771.742: 64.9745% ( 108) 00:07:52.726 8771.742 - 8822.154: 65.6786% ( 105) 00:07:52.726 8822.154 - 8872.566: 66.3224% ( 96) 00:07:52.726 8872.566 - 8922.978: 66.8991% ( 86) 00:07:52.726 8922.978 - 8973.391: 67.4624% ( 84) 00:07:52.726 8973.391 - 9023.803: 67.9587% ( 74) 00:07:52.726 9023.803 - 9074.215: 68.4415% ( 72) 00:07:52.726 9074.215 - 9124.628: 68.8975% ( 68) 00:07:52.726 9124.628 - 9175.040: 69.3804% ( 72) 00:07:52.726 9175.040 - 9225.452: 69.8766% ( 74) 00:07:52.726 9225.452 - 9275.865: 70.3192% ( 66) 00:07:52.726 9275.865 - 9326.277: 70.7014% ( 57) 00:07:52.726 9326.277 - 9376.689: 71.1373% ( 65) 00:07:52.726 9376.689 - 9427.102: 71.5598% ( 63) 00:07:52.726 9427.102 - 9477.514: 71.9890% ( 64) 00:07:52.726 9477.514 - 9527.926: 72.4048% ( 62) 00:07:52.726 9527.926 - 9578.338: 72.8943% ( 73) 00:07:52.726 9578.338 - 9628.751: 73.3906% ( 74) 00:07:52.726 9628.751 - 9679.163: 73.8533% ( 69) 00:07:52.726 9679.163 - 9729.575: 74.3696% ( 77) 00:07:52.726 9729.575 - 9779.988: 74.8592% ( 73) 00:07:52.726 9779.988 - 9830.400: 75.3286% ( 70) 00:07:52.727 9830.400 - 9880.812: 75.8450% ( 77) 00:07:52.727 9880.812 - 9931.225: 76.3948% ( 82) 00:07:52.727 9931.225 - 9981.637: 76.9179% ( 78) 00:07:52.727 9981.637 - 10032.049: 77.4410% ( 78) 00:07:52.727 10032.049 - 10082.462: 78.0244% ( 87) 00:07:52.727 10082.462 - 10132.874: 78.5207% ( 74) 00:07:52.727 10132.874 - 10183.286: 79.0504% ( 79) 00:07:52.727 10183.286 - 10233.698: 79.5400% ( 73) 00:07:52.727 10233.698 - 10284.111: 79.9893% ( 67) 00:07:52.727 10284.111 - 10334.523: 80.4587% ( 70) 00:07:52.727 10334.523 - 10384.935: 80.8543% ( 59) 00:07:52.727 10384.935 - 10435.348: 81.2232% ( 55) 00:07:52.727 10435.348 - 10485.760: 81.6322% ( 61) 00:07:52.727 10485.760 - 10536.172: 82.0346% ( 60) 00:07:52.727 10536.172 - 10586.585: 82.3565% ( 48) 00:07:52.727 10586.585 - 10636.997: 82.7320% ( 56) 00:07:52.727 10636.997 - 10687.409: 83.0942% ( 54) 00:07:52.727 10687.409 - 10737.822: 83.4563% ( 54) 00:07:52.727 10737.822 - 10788.234: 83.8117% ( 53) 00:07:52.727 10788.234 - 10838.646: 84.1336% ( 48) 00:07:52.727 10838.646 - 10889.058: 84.4622% ( 49) 00:07:52.727 10889.058 - 10939.471: 84.7908% ( 49) 00:07:52.727 10939.471 - 10989.883: 85.0456% ( 38) 00:07:52.727 10989.883 - 11040.295: 85.3407% ( 44) 00:07:52.727 11040.295 - 11090.708: 85.6089% ( 40) 00:07:52.727 11090.708 - 11141.120: 85.8168% ( 31) 00:07:52.727 11141.120 - 11191.532: 86.0515% ( 35) 
00:07:52.727 11191.532 - 11241.945: 86.2862% ( 35) 00:07:52.727 11241.945 - 11292.357: 86.5075% ( 33) 00:07:52.727 11292.357 - 11342.769: 86.7154% ( 31) 00:07:52.727 11342.769 - 11393.182: 86.8696% ( 23) 00:07:52.727 11393.182 - 11443.594: 87.0172% ( 22) 00:07:52.727 11443.594 - 11494.006: 87.1714% ( 23) 00:07:52.727 11494.006 - 11544.418: 87.3592% ( 28) 00:07:52.727 11544.418 - 11594.831: 87.5536% ( 29) 00:07:52.727 11594.831 - 11645.243: 87.7682% ( 32) 00:07:52.727 11645.243 - 11695.655: 87.9761% ( 31) 00:07:52.727 11695.655 - 11746.068: 88.1639% ( 28) 00:07:52.727 11746.068 - 11796.480: 88.3517% ( 28) 00:07:52.727 11796.480 - 11846.892: 88.5394% ( 28) 00:07:52.727 11846.892 - 11897.305: 88.7071% ( 25) 00:07:52.727 11897.305 - 11947.717: 88.8546% ( 22) 00:07:52.727 11947.717 - 11998.129: 89.0424% ( 28) 00:07:52.727 11998.129 - 12048.542: 89.2234% ( 27) 00:07:52.727 12048.542 - 12098.954: 89.4246% ( 30) 00:07:52.727 12098.954 - 12149.366: 89.6124% ( 28) 00:07:52.727 12149.366 - 12199.778: 89.8136% ( 30) 00:07:52.727 12199.778 - 12250.191: 90.0215% ( 31) 00:07:52.727 12250.191 - 12300.603: 90.2293% ( 31) 00:07:52.727 12300.603 - 12351.015: 90.4104% ( 27) 00:07:52.727 12351.015 - 12401.428: 90.6384% ( 34) 00:07:52.727 12401.428 - 12451.840: 90.8329% ( 29) 00:07:52.727 12451.840 - 12502.252: 91.0408% ( 31) 00:07:52.727 12502.252 - 12552.665: 91.2487% ( 31) 00:07:52.727 12552.665 - 12603.077: 91.4633% ( 32) 00:07:52.727 12603.077 - 12653.489: 91.6845% ( 33) 00:07:52.727 12653.489 - 12703.902: 91.8723% ( 28) 00:07:52.727 12703.902 - 12754.314: 92.0735% ( 30) 00:07:52.727 12754.314 - 12804.726: 92.2546% ( 27) 00:07:52.727 12804.726 - 12855.138: 92.4490% ( 29) 00:07:52.727 12855.138 - 12905.551: 92.6301% ( 27) 00:07:52.727 12905.551 - 13006.375: 92.9855% ( 53) 00:07:52.727 13006.375 - 13107.200: 93.3342% ( 52) 00:07:52.727 13107.200 - 13208.025: 93.7031% ( 55) 00:07:52.727 13208.025 - 13308.849: 94.0451% ( 51) 00:07:52.727 13308.849 - 13409.674: 94.3334% ( 43) 00:07:52.727 13409.674 - 13510.498: 94.5547% ( 33) 00:07:52.727 13510.498 - 13611.323: 94.7492% ( 29) 00:07:52.727 13611.323 - 13712.148: 94.9504% ( 30) 00:07:52.727 13712.148 - 13812.972: 95.1381% ( 28) 00:07:52.727 13812.972 - 13913.797: 95.2656% ( 19) 00:07:52.727 13913.797 - 14014.622: 95.4332% ( 25) 00:07:52.727 14014.622 - 14115.446: 95.6210% ( 28) 00:07:52.727 14115.446 - 14216.271: 95.8423% ( 33) 00:07:52.727 14216.271 - 14317.095: 96.0233% ( 27) 00:07:52.727 14317.095 - 14417.920: 96.1977% ( 26) 00:07:52.727 14417.920 - 14518.745: 96.3922% ( 29) 00:07:52.727 14518.745 - 14619.569: 96.6269% ( 35) 00:07:52.727 14619.569 - 14720.394: 96.8817% ( 38) 00:07:52.727 14720.394 - 14821.218: 97.1164% ( 35) 00:07:52.727 14821.218 - 14922.043: 97.3176% ( 30) 00:07:52.727 14922.043 - 15022.868: 97.5389% ( 33) 00:07:52.727 15022.868 - 15123.692: 97.7267% ( 28) 00:07:52.727 15123.692 - 15224.517: 97.9144% ( 28) 00:07:52.727 15224.517 - 15325.342: 98.0418% ( 19) 00:07:52.727 15325.342 - 15426.166: 98.1827% ( 21) 00:07:52.727 15426.166 - 15526.991: 98.2766% ( 14) 00:07:52.727 15526.991 - 15627.815: 98.3503% ( 11) 00:07:52.727 15627.815 - 15728.640: 98.4040% ( 8) 00:07:52.727 15728.640 - 15829.465: 98.4777% ( 11) 00:07:52.727 15829.465 - 15930.289: 98.5582% ( 12) 00:07:52.727 15930.289 - 16031.114: 98.6186% ( 9) 00:07:52.727 16031.114 - 16131.938: 98.6923% ( 11) 00:07:52.727 16131.938 - 16232.763: 98.7124% ( 3) 00:07:52.727 17644.308 - 17745.132: 98.7326% ( 3) 00:07:52.727 17745.132 - 17845.957: 98.7594% ( 4) 00:07:52.727 17845.957 - 17946.782: 
98.7862% ( 4)
00:07:52.727 [... remaining bucket lines of this histogram condensed: 17946.782 us - 19358.326 us rise from 98.8130% to 99.5708%; tail buckets 23693.785 us - 25609.452 us reach 100.0000% ...]
00:07:52.727
00:07:52.727 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:52.727 ==============================================================================
00:07:52.727        Range in us     Cumulative IO count
00:07:52.728 [... bucket lines condensed: 5797.415 us - 19358.326 us rise from 0.0134% to 99.5708%; tail buckets 22080.591 us - 23895.434 us reach 100.0000% ...]
00:07:52.728
00:07:52.729 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:52.729 ==============================================================================
00:07:52.729        Range in us     Cumulative IO count
00:07:52.730 [... bucket lines condensed: 5822.622 us - 18753.378 us rise from 0.0067% to 99.5708%; tail buckets 20467.397 us - 22383.065 us reach 100.0000% ...]
00:07:52.731
00:07:52.731 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:52.731 ==============================================================================
00:07:52.731        Range in us     Cumulative IO count
00:07:52.732 [... bucket lines condensed: 5822.622 us - 18249.255 us rise from 0.0067% to 99.5708%; tail buckets 18955.028 us - 20769.871 us reach 100.0000% ...]
00:07:52.732
00:07:52.732 09:01:31 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:54.120 Initializing NVMe Controllers
00:07:54.120 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:54.120 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:54.120 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:54.120 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:54.120 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:54.120 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:54.120 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:54.120 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:54.120 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:54.120 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:54.120 Initialization complete. Launching workers.
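A note on the invocation above: spdk_nvme_perf was run with -q 128 (queue depth), -w write (I/O pattern), -o 12288 (I/O size in bytes), -t 1 (run time in seconds), -LL (software latency tracking; giving the flag twice requests the detailed histograms that follow) and -i 0 (shared memory group ID). These glosses match spdk_nvme_perf's usage text as commonly documented, but treat them as an assumption worth re-checking against the SPDK revision under test. With a fixed 12288-byte I/O, the MiB/s column in the table below is just IOPS scaled by the I/O size; a minimal shell sketch of that cross-check:

awk 'BEGIN {
  iops = 12628.63                   # per-namespace IOPS from the table below
  io_size = 12288                   # bytes per I/O, from the -o 12288 flag
  # MiB/s = IOPS * bytes-per-I/O / 2^20; prints 147.99, matching the table
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'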
00:07:54.120 ========================================================
00:07:54.120                                                                             Latency(us)
00:07:54.120 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:54.120 PCIE (0000:00:10.0) NSID 1 from core 0:   12628.63     147.99   10163.17    6232.22   31769.65
00:07:54.120 PCIE (0000:00:11.0) NSID 1 from core 0:   12628.63     147.99   10151.30    6605.14   30093.75
00:07:54.120 PCIE (0000:00:13.0) NSID 1 from core 0:   12628.63     147.99   10139.06    6516.43   28933.90
00:07:54.120 PCIE (0000:00:12.0) NSID 1 from core 0:   12628.63     147.99   10127.06    6481.11   27306.78
00:07:54.120 PCIE (0000:00:12.0) NSID 2 from core 0:   12628.63     147.99   10115.29    6247.92   25720.66
00:07:54.120 PCIE (0000:00:12.0) NSID 3 from core 0:   12628.63     147.99   10103.49    6371.59   24246.31
00:07:54.120 ========================================================
00:07:54.120 Total                                  :   75771.80     887.95   10133.23    6232.22   31769.65
00:07:54.120
00:07:54.120 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.120 =================================================================================
00:07:54.120   1.00000% :  6805.662us
00:07:54.120  10.00000% :  7158.548us
00:07:54.120  25.00000% :  7914.732us
00:07:54.120  50.00000% :  9023.803us
00:07:54.120  75.00000% : 11342.769us
00:07:54.120  90.00000% : 15224.517us
00:07:54.120  95.00000% : 16837.711us
00:07:54.120  98.00000% : 18652.554us
00:07:54.120  99.00000% : 23189.662us
00:07:54.120  99.50000% : 29844.086us
00:07:54.120  99.90000% : 31457.280us
00:07:54.120  99.99000% : 31860.578us
00:07:54.120  99.99900% : 31860.578us
00:07:54.120  99.99990% : 31860.578us
00:07:54.120  99.99999% : 31860.578us
00:07:54.120
00:07:54.120 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.120 =================================================================================
00:07:54.120   1.00000% :  6856.074us
00:07:54.120  10.00000% :  7208.960us
00:07:54.120  25.00000% :  7864.320us
00:07:54.120  50.00000% :  9023.803us
00:07:54.120  75.00000% : 11342.769us
00:07:54.120  90.00000% : 15325.342us
00:07:54.120  95.00000% : 16736.886us
00:07:54.120  98.00000% : 18551.729us
00:07:54.120  99.00000% : 22483.889us
00:07:54.120  99.50000% : 28432.542us
00:07:54.120  99.90000% : 29844.086us
00:07:54.120  99.99000% : 30247.385us
00:07:54.120  99.99900% : 30247.385us
00:07:54.120  99.99990% : 30247.385us
00:07:54.120  99.99999% : 30247.385us
00:07:54.120
00:07:54.120 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.121 =================================================================================
00:07:54.121   1.00000% :  6856.074us
00:07:54.121  10.00000% :  7208.960us
00:07:54.121  25.00000% :  7864.320us
00:07:54.121  50.00000% :  8973.391us
00:07:54.121  75.00000% : 11393.182us
00:07:54.121  90.00000% : 15325.342us
00:07:54.121  95.00000% : 16736.886us
00:07:54.121  98.00000% : 18854.203us
00:07:54.121  99.00000% : 21475.643us
00:07:54.121  99.50000% : 27222.646us
00:07:54.121  99.90000% : 28634.191us
00:07:54.121  99.99000% : 29037.489us
00:07:54.121  99.99900% : 29037.489us
00:07:54.121  99.99990% : 29037.489us
00:07:54.121  99.99999% : 29037.489us
00:07:54.121
00:07:54.121 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:54.121 =================================================================================
00:07:54.121   1.00000% :  6856.074us
00:07:54.121  10.00000% :  7208.960us
00:07:54.121  25.00000% :  7813.908us
00:07:54.121  50.00000% :  8922.978us
00:07:54.121  75.00000% : 11494.006us
00:07:54.121  90.00000% : 15325.342us
00:07:54.121  95.00000% : 16636.062us
00:07:54.121  98.00000% : 18753.378us
00:07:54.121  99.00000% : 20064.098us
00:07:54.121  99.50000% : 24802.855us
00:07:54.121  99.90000% : 27020.997us
00:07:54.121  99.99000% : 27424.295us
00:07:54.121  99.99900% : 27424.295us
00:07:54.121  99.99990% : 27424.295us
00:07:54.121  99.99999% : 27424.295us
00:07:54.121
00:07:54.121 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:54.121 =================================================================================
00:07:54.121   1.00000% :  6856.074us
00:07:54.121  10.00000% :  7208.960us
00:07:54.121  25.00000% :  7864.320us
00:07:54.121  50.00000% :  8973.391us
00:07:54.121  75.00000% : 11494.006us
00:07:54.121  90.00000% : 15325.342us
00:07:54.121  95.00000% : 16636.062us
00:07:54.121  98.00000% : 18350.080us
00:07:54.121  99.00000% : 19559.975us
00:07:54.121  99.50000% : 23290.486us
00:07:54.121  99.90000% : 25407.803us
00:07:54.121  99.99000% : 25710.277us
00:07:54.121  99.99900% : 25811.102us
00:07:54.121  99.99990% : 25811.102us
00:07:54.121  99.99999% : 25811.102us
00:07:54.121
00:07:54.121 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:54.121 =================================================================================
00:07:54.121   1.00000% :  6856.074us
00:07:54.121  10.00000% :  7208.960us
00:07:54.121  25.00000% :  7914.732us
00:07:54.121  50.00000% :  9023.803us
00:07:54.121  75.00000% : 11393.182us
00:07:54.121  90.00000% : 15325.342us
00:07:54.121  95.00000% : 16636.062us
00:07:54.121  98.00000% : 18047.606us
00:07:54.121  99.00000% : 19358.326us
00:07:54.121  99.50000% : 21576.468us
00:07:54.121  99.90000% : 23996.258us
00:07:54.121  99.99000% : 24298.732us
00:07:54.121  99.99900% : 24298.732us
00:07:54.121  99.99990% : 24298.732us
00:07:54.121  99.99999% : 24298.732us
00:07:54.121
00:07:54.121 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:54.121 ==============================================================================
00:07:54.121        Range in us     Cumulative IO count
00:07:54.122 [... bucket lines condensed: 6225.920 us - 24903.680 us rise from 0.0079% to 99.4949%; tail buckets 29642.437 us - 31860.578 us reach 100.0000% ...]
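The summary percentiles and the detailed histograms describe the same data: each histogram row is cumulative, so a percentile can be read off as the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal sketch of that lookup, assuming a hypothetical file hist.txt holding one "low - high: pct% ( count)" row per line with the timestamp prefix stripped:

awk -v target=50 '
  $2 == "-" {                     # bucket rows: "<low> - <high>: <cum>% ( <count>)"
    hi = $3;  sub(/:$/, "", hi)   # bucket upper bound, trailing colon stripped
    pct = $4; sub(/%$/, "", pct)  # cumulative percentage
    if (pct + 0 >= target) { printf "p%s <= %s us\n", target, hi; exit }
  }' hist.txt

Against the PCIE (0000:00:10.0) NSID 1 buckets this lands on the 8973.391 - 9023.803 bucket (cumulative 51.0969%), agreeing with the 50.00000% : 9023.803us row in its summary table above.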
00:07:54.122 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:54.122 ==============================================================================
00:07:54.122        Range in us     Cumulative IO count
00:07:54.123 [... bucket lines condensed: 6604.012 us - 24601.206 us rise from 0.0473% to 99.4949%; tail buckets 28230.892 us - 30247.385 us reach 100.0000% ...]
00:07:54.124
00:07:54.124 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:54.124 ==============================================================================
00:07:54.124        Range in us     Cumulative IO count
00:07:54.124 [... bucket lines condensed: 6503.188 us - 8318.031 us rise from 0.0316% to 31.6604% ...]
00:07:54.124 8318.031 - 8368.443: 33.3886% ( 219)
00:07:54.124 8368.443 - 8418.855: 36.0164% ( 333) 00:07:54.124 8418.855 - 8469.268: 38.3207% ( 292) 00:07:54.124 8469.268 - 8519.680: 40.3883% ( 262) 00:07:54.124 8519.680 - 8570.092: 41.8797% ( 189) 00:07:54.124 8570.092 - 8620.505: 43.8210% ( 246) 00:07:54.124 8620.505 - 8670.917: 45.1389% ( 167) 00:07:54.124 8670.917 - 8721.329: 46.1411% ( 127) 00:07:54.124 8721.329 - 8771.742: 47.0565% ( 116) 00:07:54.124 8771.742 - 8822.154: 47.9009% ( 107) 00:07:54.124 8822.154 - 8872.566: 48.7295% ( 105) 00:07:54.124 8872.566 - 8922.978: 49.7554% ( 130) 00:07:54.124 8922.978 - 8973.391: 50.7812% ( 130) 00:07:54.124 8973.391 - 9023.803: 51.8939% ( 141) 00:07:54.124 9023.803 - 9074.215: 53.1171% ( 155) 00:07:54.124 9074.215 - 9124.628: 54.4034% ( 163) 00:07:54.124 9124.628 - 9175.040: 55.4451% ( 132) 00:07:54.124 9175.040 - 9225.452: 56.4631% ( 129) 00:07:54.124 9225.452 - 9275.865: 57.6310% ( 148) 00:07:54.124 9275.865 - 9326.277: 58.9094% ( 162) 00:07:54.124 9326.277 - 9376.689: 59.8722% ( 122) 00:07:54.124 9376.689 - 9427.102: 61.0480% ( 149) 00:07:54.124 9427.102 - 9477.514: 61.8450% ( 101) 00:07:54.124 9477.514 - 9527.926: 62.7289% ( 112) 00:07:54.124 9527.926 - 9578.338: 63.6364% ( 115) 00:07:54.124 9578.338 - 9628.751: 64.5439% ( 115) 00:07:54.124 9628.751 - 9679.163: 65.2699% ( 92) 00:07:54.124 9679.163 - 9729.575: 65.9012% ( 80) 00:07:54.124 9729.575 - 9779.988: 66.6430% ( 94) 00:07:54.124 9779.988 - 9830.400: 67.2112% ( 72) 00:07:54.124 9830.400 - 9880.812: 67.8662% ( 83) 00:07:54.124 9880.812 - 9931.225: 68.5133% ( 82) 00:07:54.124 9931.225 - 9981.637: 68.8920% ( 48) 00:07:54.124 9981.637 - 10032.049: 69.2472% ( 45) 00:07:54.124 10032.049 - 10082.462: 69.5391% ( 37) 00:07:54.124 10082.462 - 10132.874: 69.7759% ( 30) 00:07:54.124 10132.874 - 10183.286: 70.0205% ( 31) 00:07:54.124 10183.286 - 10233.698: 70.3362% ( 40) 00:07:54.124 10233.698 - 10284.111: 70.4940% ( 20) 00:07:54.124 10284.111 - 10334.523: 70.7071% ( 27) 00:07:54.124 10334.523 - 10384.935: 70.9201% ( 27) 00:07:54.124 10384.935 - 10435.348: 71.1253% ( 26) 00:07:54.124 10435.348 - 10485.760: 71.4252% ( 38) 00:07:54.124 10485.760 - 10536.172: 71.6304% ( 26) 00:07:54.124 10536.172 - 10586.585: 71.8277% ( 25) 00:07:54.124 10586.585 - 10636.997: 72.0565% ( 29) 00:07:54.124 10636.997 - 10687.409: 72.4353% ( 48) 00:07:54.124 10687.409 - 10737.822: 72.7036% ( 34) 00:07:54.124 10737.822 - 10788.234: 72.9246% ( 28) 00:07:54.124 10788.234 - 10838.646: 73.1218% ( 25) 00:07:54.124 10838.646 - 10889.058: 73.3033% ( 23) 00:07:54.124 10889.058 - 10939.471: 73.4612% ( 20) 00:07:54.124 10939.471 - 10989.883: 73.6111% ( 19) 00:07:54.124 10989.883 - 11040.295: 73.7374% ( 16) 00:07:54.124 11040.295 - 11090.708: 73.9189% ( 23) 00:07:54.124 11090.708 - 11141.120: 74.2345% ( 40) 00:07:54.124 11141.120 - 11191.532: 74.3766% ( 18) 00:07:54.124 11191.532 - 11241.945: 74.5107% ( 17) 00:07:54.124 11241.945 - 11292.357: 74.6449% ( 17) 00:07:54.124 11292.357 - 11342.769: 74.7948% ( 19) 00:07:54.124 11342.769 - 11393.182: 75.0473% ( 32) 00:07:54.124 11393.182 - 11443.594: 75.2210% ( 22) 00:07:54.124 11443.594 - 11494.006: 75.3867% ( 21) 00:07:54.124 11494.006 - 11544.418: 75.5682% ( 23) 00:07:54.124 11544.418 - 11594.831: 75.7655% ( 25) 00:07:54.124 11594.831 - 11645.243: 75.9943% ( 29) 00:07:54.124 11645.243 - 11695.655: 76.2153% ( 28) 00:07:54.124 11695.655 - 11746.068: 76.3889% ( 22) 00:07:54.124 11746.068 - 11796.480: 76.5073% ( 15) 00:07:54.124 11796.480 - 11846.892: 76.6414% ( 17) 00:07:54.124 11846.892 - 11897.305: 76.8150% ( 22) 00:07:54.124 11897.305 - 
11947.717: 76.9492% ( 17) 00:07:54.124 11947.717 - 11998.129: 77.1070% ( 20) 00:07:54.124 11998.129 - 12048.542: 77.2648% ( 20) 00:07:54.124 12048.542 - 12098.954: 77.4937% ( 29) 00:07:54.124 12098.954 - 12149.366: 77.6910% ( 25) 00:07:54.124 12149.366 - 12199.778: 77.8567% ( 21) 00:07:54.124 12199.778 - 12250.191: 78.0934% ( 30) 00:07:54.124 12250.191 - 12300.603: 78.2513% ( 20) 00:07:54.124 12300.603 - 12351.015: 78.4091% ( 20) 00:07:54.124 12351.015 - 12401.428: 78.5985% ( 24) 00:07:54.124 12401.428 - 12451.840: 78.7642% ( 21) 00:07:54.124 12451.840 - 12502.252: 78.8747% ( 14) 00:07:54.124 12502.252 - 12552.665: 79.0720% ( 25) 00:07:54.124 12552.665 - 12603.077: 79.2614% ( 24) 00:07:54.124 12603.077 - 12653.489: 79.5218% ( 33) 00:07:54.124 12653.489 - 12703.902: 79.7270% ( 26) 00:07:54.124 12703.902 - 12754.314: 79.8769% ( 19) 00:07:54.124 12754.314 - 12804.726: 80.0032% ( 16) 00:07:54.124 12804.726 - 12855.138: 80.1452% ( 18) 00:07:54.124 12855.138 - 12905.551: 80.3109% ( 21) 00:07:54.124 12905.551 - 13006.375: 80.7449% ( 55) 00:07:54.124 13006.375 - 13107.200: 81.1553% ( 52) 00:07:54.124 13107.200 - 13208.025: 81.3842% ( 29) 00:07:54.124 13208.025 - 13308.849: 81.5893% ( 26) 00:07:54.124 13308.849 - 13409.674: 81.8655% ( 35) 00:07:54.124 13409.674 - 13510.498: 82.2206% ( 45) 00:07:54.124 13510.498 - 13611.323: 82.7257% ( 64) 00:07:54.124 13611.323 - 13712.148: 83.1834% ( 58) 00:07:54.124 13712.148 - 13812.972: 83.5701% ( 49) 00:07:54.124 13812.972 - 13913.797: 83.9725% ( 51) 00:07:54.124 13913.797 - 14014.622: 84.3513% ( 48) 00:07:54.124 14014.622 - 14115.446: 84.7932% ( 56) 00:07:54.124 14115.446 - 14216.271: 85.1484% ( 45) 00:07:54.124 14216.271 - 14317.095: 85.5193% ( 47) 00:07:54.124 14317.095 - 14417.920: 85.9927% ( 60) 00:07:54.124 14417.920 - 14518.745: 86.4662% ( 60) 00:07:54.124 14518.745 - 14619.569: 86.9555% ( 62) 00:07:54.124 14619.569 - 14720.394: 87.3974% ( 56) 00:07:54.124 14720.394 - 14821.218: 87.8157% ( 53) 00:07:54.124 14821.218 - 14922.043: 88.2655% ( 57) 00:07:54.124 14922.043 - 15022.868: 88.7153% ( 57) 00:07:54.124 15022.868 - 15123.692: 89.1572% ( 56) 00:07:54.124 15123.692 - 15224.517: 89.6070% ( 57) 00:07:54.124 15224.517 - 15325.342: 90.0726% ( 59) 00:07:54.124 15325.342 - 15426.166: 90.5698% ( 63) 00:07:54.124 15426.166 - 15526.991: 91.0827% ( 65) 00:07:54.124 15526.991 - 15627.815: 91.5404% ( 58) 00:07:54.124 15627.815 - 15728.640: 92.0218% ( 61) 00:07:54.124 15728.640 - 15829.465: 92.4321% ( 52) 00:07:54.124 15829.465 - 15930.289: 92.8504% ( 53) 00:07:54.124 15930.289 - 16031.114: 93.2528% ( 51) 00:07:54.124 16031.114 - 16131.938: 93.5448% ( 37) 00:07:54.124 16131.938 - 16232.763: 93.8210% ( 35) 00:07:54.124 16232.763 - 16333.588: 94.0814% ( 33) 00:07:54.124 16333.588 - 16434.412: 94.3497% ( 34) 00:07:54.124 16434.412 - 16535.237: 94.6181% ( 34) 00:07:54.124 16535.237 - 16636.062: 94.9179% ( 38) 00:07:54.124 16636.062 - 16736.886: 95.2257% ( 39) 00:07:54.124 16736.886 - 16837.711: 95.4309% ( 26) 00:07:54.124 16837.711 - 16938.535: 95.6203% ( 24) 00:07:54.125 16938.535 - 17039.360: 95.7781% ( 20) 00:07:54.125 17039.360 - 17140.185: 95.9596% ( 23) 00:07:54.125 17140.185 - 17241.009: 96.1016% ( 18) 00:07:54.125 17241.009 - 17341.834: 96.2279% ( 16) 00:07:54.125 17341.834 - 17442.658: 96.3226% ( 12) 00:07:54.125 17442.658 - 17543.483: 96.4489% ( 16) 00:07:54.125 17543.483 - 17644.308: 96.4962% ( 6) 00:07:54.125 17644.308 - 17745.132: 96.5436% ( 6) 00:07:54.125 17745.132 - 17845.957: 96.6304% ( 11) 00:07:54.125 17845.957 - 17946.782: 96.7330% ( 13) 
00:07:54.125 17946.782 - 18047.606: 96.8040% ( 9) 00:07:54.125 18047.606 - 18148.431: 96.9381% ( 17) 00:07:54.125 18148.431 - 18249.255: 97.1039% ( 21) 00:07:54.125 18249.255 - 18350.080: 97.2696% ( 21) 00:07:54.125 18350.080 - 18450.905: 97.4116% ( 18) 00:07:54.125 18450.905 - 18551.729: 97.5300% ( 15) 00:07:54.125 18551.729 - 18652.554: 97.7036% ( 22) 00:07:54.125 18652.554 - 18753.378: 97.8930% ( 24) 00:07:54.125 18753.378 - 18854.203: 98.0982% ( 26) 00:07:54.125 18854.203 - 18955.028: 98.2086% ( 14) 00:07:54.125 18955.028 - 19055.852: 98.3507% ( 18) 00:07:54.125 19055.852 - 19156.677: 98.4691% ( 15) 00:07:54.125 19156.677 - 19257.502: 98.5638% ( 12) 00:07:54.125 19257.502 - 19358.326: 98.6664% ( 13) 00:07:54.125 19358.326 - 19459.151: 98.7689% ( 13) 00:07:54.125 19459.151 - 19559.975: 98.8794% ( 14) 00:07:54.125 19559.975 - 19660.800: 98.9662% ( 11) 00:07:54.125 19660.800 - 19761.625: 98.9899% ( 3) 00:07:54.125 21173.169 - 21273.994: 98.9978% ( 1) 00:07:54.125 21374.818 - 21475.643: 99.0767% ( 10) 00:07:54.125 21475.643 - 21576.468: 99.1635% ( 11) 00:07:54.125 21576.468 - 21677.292: 99.1872% ( 3) 00:07:54.125 21677.292 - 21778.117: 99.2109% ( 3) 00:07:54.125 21778.117 - 21878.942: 99.2266% ( 2) 00:07:54.125 21878.942 - 21979.766: 99.2503% ( 3) 00:07:54.125 21979.766 - 22080.591: 99.2740% ( 3) 00:07:54.125 22080.591 - 22181.415: 99.2977% ( 3) 00:07:54.125 22181.415 - 22282.240: 99.3213% ( 3) 00:07:54.125 22282.240 - 22383.065: 99.3529% ( 4) 00:07:54.125 22383.065 - 22483.889: 99.3845% ( 4) 00:07:54.125 22483.889 - 22584.714: 99.4081% ( 3) 00:07:54.125 22584.714 - 22685.538: 99.4397% ( 4) 00:07:54.125 22685.538 - 22786.363: 99.4713% ( 4) 00:07:54.125 22786.363 - 22887.188: 99.4949% ( 3) 00:07:54.125 27020.997 - 27222.646: 99.5186% ( 3) 00:07:54.125 27222.646 - 27424.295: 99.5739% ( 7) 00:07:54.125 27424.295 - 27625.945: 99.6291% ( 7) 00:07:54.125 27625.945 - 27827.594: 99.6843% ( 7) 00:07:54.125 27827.594 - 28029.243: 99.7317% ( 6) 00:07:54.125 28029.243 - 28230.892: 99.7948% ( 8) 00:07:54.125 28230.892 - 28432.542: 99.8501% ( 7) 00:07:54.125 28432.542 - 28634.191: 99.9132% ( 8) 00:07:54.125 28634.191 - 28835.840: 99.9684% ( 7) 00:07:54.125 28835.840 - 29037.489: 100.0000% ( 4) 00:07:54.125 00:07:54.125 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:54.125 ============================================================================== 00:07:54.125 Range in us Cumulative IO count 00:07:54.125 6452.775 - 6503.188: 0.0316% ( 4) 00:07:54.125 6503.188 - 6553.600: 0.0947% ( 8) 00:07:54.125 6553.600 - 6604.012: 0.1578% ( 8) 00:07:54.125 6604.012 - 6654.425: 0.2841% ( 16) 00:07:54.125 6654.425 - 6704.837: 0.3788% ( 12) 00:07:54.125 6704.837 - 6755.249: 0.5287% ( 19) 00:07:54.125 6755.249 - 6805.662: 0.6708% ( 18) 00:07:54.125 6805.662 - 6856.074: 1.0022% ( 42) 00:07:54.125 6856.074 - 6906.486: 1.6493% ( 82) 00:07:54.125 6906.486 - 6956.898: 2.5253% ( 111) 00:07:54.125 6956.898 - 7007.311: 3.2828% ( 96) 00:07:54.125 7007.311 - 7057.723: 4.6875% ( 178) 00:07:54.125 7057.723 - 7108.135: 6.0843% ( 177) 00:07:54.125 7108.135 - 7158.548: 7.5047% ( 180) 00:07:54.125 7158.548 - 7208.960: 10.0300% ( 320) 00:07:54.125 7208.960 - 7259.372: 11.9634% ( 245) 00:07:54.125 7259.372 - 7309.785: 14.5123% ( 323) 00:07:54.125 7309.785 - 7360.197: 16.2800% ( 224) 00:07:54.125 7360.197 - 7410.609: 17.9056% ( 206) 00:07:54.125 7410.609 - 7461.022: 19.0735% ( 148) 00:07:54.125 7461.022 - 7511.434: 20.3204% ( 158) 00:07:54.125 7511.434 - 7561.846: 21.2595% ( 119) 00:07:54.125 7561.846 - 7612.258: 
22.2459% ( 125) 00:07:54.125 7612.258 - 7662.671: 22.9640% ( 91) 00:07:54.125 7662.671 - 7713.083: 23.7689% ( 102) 00:07:54.125 7713.083 - 7763.495: 24.6607% ( 113) 00:07:54.125 7763.495 - 7813.908: 25.0631% ( 51) 00:07:54.125 7813.908 - 7864.320: 25.5287% ( 59) 00:07:54.125 7864.320 - 7914.732: 26.1837% ( 83) 00:07:54.125 7914.732 - 7965.145: 26.6730% ( 62) 00:07:54.125 7965.145 - 8015.557: 27.0991% ( 54) 00:07:54.125 8015.557 - 8065.969: 27.6357% ( 68) 00:07:54.125 8065.969 - 8116.382: 28.3381% ( 89) 00:07:54.125 8116.382 - 8166.794: 29.0956% ( 96) 00:07:54.125 8166.794 - 8217.206: 30.0584% ( 122) 00:07:54.125 8217.206 - 8267.618: 31.4078% ( 171) 00:07:54.125 8267.618 - 8318.031: 32.4968% ( 138) 00:07:54.125 8318.031 - 8368.443: 33.6332% ( 144) 00:07:54.125 8368.443 - 8418.855: 35.5350% ( 241) 00:07:54.125 8418.855 - 8469.268: 37.5316% ( 253) 00:07:54.125 8469.268 - 8519.680: 39.3860% ( 235) 00:07:54.125 8519.680 - 8570.092: 41.4220% ( 258) 00:07:54.125 8570.092 - 8620.505: 43.4343% ( 255) 00:07:54.125 8620.505 - 8670.917: 44.4918% ( 134) 00:07:54.125 8670.917 - 8721.329: 45.7623% ( 161) 00:07:54.125 8721.329 - 8771.742: 46.8908% ( 143) 00:07:54.125 8771.742 - 8822.154: 48.1455% ( 159) 00:07:54.125 8822.154 - 8872.566: 49.1477% ( 127) 00:07:54.125 8872.566 - 8922.978: 50.1736% ( 130) 00:07:54.125 8922.978 - 8973.391: 51.2468% ( 136) 00:07:54.125 8973.391 - 9023.803: 52.3280% ( 137) 00:07:54.125 9023.803 - 9074.215: 53.2907% ( 122) 00:07:54.125 9074.215 - 9124.628: 54.3245% ( 131) 00:07:54.125 9124.628 - 9175.040: 55.2794% ( 121) 00:07:54.125 9175.040 - 9225.452: 56.3131% ( 131) 00:07:54.125 9225.452 - 9275.865: 57.4179% ( 140) 00:07:54.125 9275.865 - 9326.277: 58.5385% ( 142) 00:07:54.125 9326.277 - 9376.689: 59.6117% ( 136) 00:07:54.125 9376.689 - 9427.102: 61.1900% ( 200) 00:07:54.125 9427.102 - 9477.514: 61.9792% ( 100) 00:07:54.125 9477.514 - 9527.926: 62.8709% ( 113) 00:07:54.125 9527.926 - 9578.338: 63.7547% ( 112) 00:07:54.125 9578.338 - 9628.751: 64.4176% ( 84) 00:07:54.125 9628.751 - 9679.163: 65.1199% ( 89) 00:07:54.125 9679.163 - 9729.575: 65.7039% ( 74) 00:07:54.125 9729.575 - 9779.988: 66.3826% ( 86) 00:07:54.125 9779.988 - 9830.400: 67.2270% ( 107) 00:07:54.125 9830.400 - 9880.812: 67.9372% ( 90) 00:07:54.125 9880.812 - 9931.225: 68.5054% ( 72) 00:07:54.125 9931.225 - 9981.637: 68.8763% ( 47) 00:07:54.125 9981.637 - 10032.049: 69.2472% ( 47) 00:07:54.125 10032.049 - 10082.462: 69.4839% ( 30) 00:07:54.125 10082.462 - 10132.874: 69.6891% ( 26) 00:07:54.125 10132.874 - 10183.286: 69.8627% ( 22) 00:07:54.125 10183.286 - 10233.698: 70.0205% ( 20) 00:07:54.125 10233.698 - 10284.111: 70.2573% ( 30) 00:07:54.125 10284.111 - 10334.523: 70.4624% ( 26) 00:07:54.125 10334.523 - 10384.935: 70.6755% ( 27) 00:07:54.125 10384.935 - 10435.348: 70.9201% ( 31) 00:07:54.125 10435.348 - 10485.760: 71.2595% ( 43) 00:07:54.125 10485.760 - 10536.172: 71.7251% ( 59) 00:07:54.125 10536.172 - 10586.585: 72.1985% ( 60) 00:07:54.125 10586.585 - 10636.997: 72.4353% ( 30) 00:07:54.125 10636.997 - 10687.409: 72.6405% ( 26) 00:07:54.125 10687.409 - 10737.822: 72.9956% ( 45) 00:07:54.125 10737.822 - 10788.234: 73.1929% ( 25) 00:07:54.125 10788.234 - 10838.646: 73.3507% ( 20) 00:07:54.125 10838.646 - 10889.058: 73.4927% ( 18) 00:07:54.125 10889.058 - 10939.471: 73.6664% ( 22) 00:07:54.125 10939.471 - 10989.883: 73.8084% ( 18) 00:07:54.125 10989.883 - 11040.295: 74.0372% ( 29) 00:07:54.125 11040.295 - 11090.708: 74.1872% ( 19) 00:07:54.125 11090.708 - 11141.120: 74.2661% ( 10) 00:07:54.125 11141.120 - 
11191.532: 74.3450% ( 10) 00:07:54.125 11191.532 - 11241.945: 74.4476% ( 13) 00:07:54.125 11241.945 - 11292.357: 74.5739% ( 16) 00:07:54.125 11292.357 - 11342.769: 74.6449% ( 9) 00:07:54.125 11342.769 - 11393.182: 74.7475% ( 13) 00:07:54.125 11393.182 - 11443.594: 74.8816% ( 17) 00:07:54.126 11443.594 - 11494.006: 75.0079% ( 16) 00:07:54.126 11494.006 - 11544.418: 75.0947% ( 11) 00:07:54.126 11544.418 - 11594.831: 75.2131% ( 15) 00:07:54.126 11594.831 - 11645.243: 75.3630% ( 19) 00:07:54.126 11645.243 - 11695.655: 75.5129% ( 19) 00:07:54.126 11695.655 - 11746.068: 75.7812% ( 34) 00:07:54.126 11746.068 - 11796.480: 75.9075% ( 16) 00:07:54.126 11796.480 - 11846.892: 76.0101% ( 13) 00:07:54.126 11846.892 - 11897.305: 76.0890% ( 10) 00:07:54.126 11897.305 - 11947.717: 76.1995% ( 14) 00:07:54.126 11947.717 - 11998.129: 76.3652% ( 21) 00:07:54.126 11998.129 - 12048.542: 76.5230% ( 20) 00:07:54.126 12048.542 - 12098.954: 76.7440% ( 28) 00:07:54.126 12098.954 - 12149.366: 76.9807% ( 30) 00:07:54.126 12149.366 - 12199.778: 77.2569% ( 35) 00:07:54.126 12199.778 - 12250.191: 77.5016% ( 31) 00:07:54.126 12250.191 - 12300.603: 77.7541% ( 32) 00:07:54.126 12300.603 - 12351.015: 78.0461% ( 37) 00:07:54.126 12351.015 - 12401.428: 78.2828% ( 30) 00:07:54.126 12401.428 - 12451.840: 78.6143% ( 42) 00:07:54.126 12451.840 - 12502.252: 78.9062% ( 37) 00:07:54.126 12502.252 - 12552.665: 79.1430% ( 30) 00:07:54.126 12552.665 - 12603.077: 79.3718% ( 29) 00:07:54.126 12603.077 - 12653.489: 79.5297% ( 20) 00:07:54.126 12653.489 - 12703.902: 79.7427% ( 27) 00:07:54.126 12703.902 - 12754.314: 79.9479% ( 26) 00:07:54.126 12754.314 - 12804.726: 80.1373% ( 24) 00:07:54.126 12804.726 - 12855.138: 80.3662% ( 29) 00:07:54.126 12855.138 - 12905.551: 80.5871% ( 28) 00:07:54.126 12905.551 - 13006.375: 81.0606% ( 60) 00:07:54.126 13006.375 - 13107.200: 81.4078% ( 44) 00:07:54.126 13107.200 - 13208.025: 81.6446% ( 30) 00:07:54.126 13208.025 - 13308.849: 81.8576% ( 27) 00:07:54.126 13308.849 - 13409.674: 82.0312% ( 22) 00:07:54.126 13409.674 - 13510.498: 82.2285% ( 25) 00:07:54.126 13510.498 - 13611.323: 82.5442% ( 40) 00:07:54.126 13611.323 - 13712.148: 82.9545% ( 52) 00:07:54.126 13712.148 - 13812.972: 83.2939% ( 43) 00:07:54.126 13812.972 - 13913.797: 83.6174% ( 41) 00:07:54.126 13913.797 - 14014.622: 83.8778% ( 33) 00:07:54.126 14014.622 - 14115.446: 84.1540% ( 35) 00:07:54.126 14115.446 - 14216.271: 84.5328% ( 48) 00:07:54.126 14216.271 - 14317.095: 84.9511% ( 53) 00:07:54.126 14317.095 - 14417.920: 85.4482% ( 63) 00:07:54.126 14417.920 - 14518.745: 86.0085% ( 71) 00:07:54.126 14518.745 - 14619.569: 86.7030% ( 88) 00:07:54.126 14619.569 - 14720.394: 87.2790% ( 73) 00:07:54.126 14720.394 - 14821.218: 87.7131% ( 55) 00:07:54.126 14821.218 - 14922.043: 88.2812% ( 72) 00:07:54.126 14922.043 - 15022.868: 88.7705% ( 62) 00:07:54.126 15022.868 - 15123.692: 89.2677% ( 63) 00:07:54.126 15123.692 - 15224.517: 89.6544% ( 49) 00:07:54.126 15224.517 - 15325.342: 90.0805% ( 54) 00:07:54.126 15325.342 - 15426.166: 90.7355% ( 83) 00:07:54.126 15426.166 - 15526.991: 91.3037% ( 72) 00:07:54.126 15526.991 - 15627.815: 91.8482% ( 69) 00:07:54.126 15627.815 - 15728.640: 92.3769% ( 67) 00:07:54.126 15728.640 - 15829.465: 92.9372% ( 71) 00:07:54.126 15829.465 - 15930.289: 93.2844% ( 44) 00:07:54.126 15930.289 - 16031.114: 93.6080% ( 41) 00:07:54.126 16031.114 - 16131.938: 93.8763% ( 34) 00:07:54.126 16131.938 - 16232.763: 94.1446% ( 34) 00:07:54.126 16232.763 - 16333.588: 94.3971% ( 32) 00:07:54.126 16333.588 - 16434.412: 94.6417% ( 31) 
00:07:54.126 16434.412 - 16535.237: 94.9337% ( 37) 00:07:54.126 16535.237 - 16636.062: 95.1231% ( 24) 00:07:54.126 16636.062 - 16736.886: 95.2494% ( 16) 00:07:54.126 16736.886 - 16837.711: 95.3598% ( 14) 00:07:54.126 16837.711 - 16938.535: 95.5414% ( 23) 00:07:54.126 16938.535 - 17039.360: 95.6913% ( 19) 00:07:54.126 17039.360 - 17140.185: 95.8333% ( 18) 00:07:54.126 17140.185 - 17241.009: 95.9517% ( 15) 00:07:54.126 17241.009 - 17341.834: 96.0780% ( 16) 00:07:54.126 17341.834 - 17442.658: 96.1727% ( 12) 00:07:54.126 17442.658 - 17543.483: 96.2358% ( 8) 00:07:54.126 17543.483 - 17644.308: 96.3463% ( 14) 00:07:54.126 17644.308 - 17745.132: 96.4883% ( 18) 00:07:54.126 17745.132 - 17845.957: 96.6619% ( 22) 00:07:54.126 17845.957 - 17946.782: 96.8355% ( 22) 00:07:54.126 17946.782 - 18047.606: 96.9145% ( 10) 00:07:54.126 18047.606 - 18148.431: 97.0723% ( 20) 00:07:54.126 18148.431 - 18249.255: 97.2617% ( 24) 00:07:54.126 18249.255 - 18350.080: 97.3958% ( 17) 00:07:54.126 18350.080 - 18450.905: 97.5616% ( 21) 00:07:54.126 18450.905 - 18551.729: 97.7036% ( 18) 00:07:54.126 18551.729 - 18652.554: 97.8535% ( 19) 00:07:54.126 18652.554 - 18753.378: 98.0193% ( 21) 00:07:54.126 18753.378 - 18854.203: 98.1376% ( 15) 00:07:54.126 18854.203 - 18955.028: 98.2560% ( 15) 00:07:54.126 18955.028 - 19055.852: 98.3902% ( 17) 00:07:54.126 19055.852 - 19156.677: 98.4848% ( 12) 00:07:54.126 19156.677 - 19257.502: 98.5559% ( 9) 00:07:54.126 19257.502 - 19358.326: 98.6190% ( 8) 00:07:54.126 19358.326 - 19459.151: 98.6900% ( 9) 00:07:54.126 19459.151 - 19559.975: 98.7295% ( 5) 00:07:54.126 19559.975 - 19660.800: 98.7768% ( 6) 00:07:54.126 19660.800 - 19761.625: 98.8557% ( 10) 00:07:54.126 19761.625 - 19862.449: 98.9268% ( 9) 00:07:54.126 19862.449 - 19963.274: 98.9583% ( 4) 00:07:54.126 19963.274 - 20064.098: 99.0215% ( 8) 00:07:54.126 20064.098 - 20164.923: 99.0609% ( 5) 00:07:54.126 20164.923 - 20265.748: 99.1162% ( 7) 00:07:54.126 20265.748 - 20366.572: 99.1556% ( 5) 00:07:54.126 20366.572 - 20467.397: 99.2030% ( 6) 00:07:54.126 20467.397 - 20568.222: 99.2266% ( 3) 00:07:54.126 20568.222 - 20669.046: 99.2503% ( 3) 00:07:54.126 20669.046 - 20769.871: 99.2740% ( 3) 00:07:54.126 20769.871 - 20870.695: 99.2977% ( 3) 00:07:54.126 20870.695 - 20971.520: 99.3292% ( 4) 00:07:54.126 20971.520 - 21072.345: 99.3529% ( 3) 00:07:54.126 21072.345 - 21173.169: 99.3766% ( 3) 00:07:54.126 21173.169 - 21273.994: 99.4003% ( 3) 00:07:54.126 21273.994 - 21374.818: 99.4239% ( 3) 00:07:54.126 21374.818 - 21475.643: 99.4476% ( 3) 00:07:54.126 21475.643 - 21576.468: 99.4792% ( 4) 00:07:54.126 21576.468 - 21677.292: 99.4949% ( 2) 00:07:54.126 24702.031 - 24802.855: 99.5028% ( 1) 00:07:54.126 24802.855 - 24903.680: 99.5344% ( 4) 00:07:54.126 24903.680 - 25004.505: 99.5660% ( 4) 00:07:54.126 25004.505 - 25105.329: 99.6212% ( 7) 00:07:54.126 25105.329 - 25206.154: 99.6370% ( 2) 00:07:54.126 25811.102 - 26012.751: 99.6607% ( 3) 00:07:54.126 26012.751 - 26214.400: 99.7159% ( 7) 00:07:54.126 26214.400 - 26416.049: 99.7633% ( 6) 00:07:54.126 26416.049 - 26617.698: 99.8185% ( 7) 00:07:54.126 26617.698 - 26819.348: 99.8658% ( 6) 00:07:54.126 26819.348 - 27020.997: 99.9211% ( 7) 00:07:54.126 27020.997 - 27222.646: 99.9763% ( 7) 00:07:54.126 27222.646 - 27424.295: 100.0000% ( 3) 00:07:54.126 00:07:54.126 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:54.126 ============================================================================== 00:07:54.126 Range in us Cumulative IO count 00:07:54.126 6225.920 - 6251.126: 0.0079% ( 1) 
00:07:54.126 6427.569 - 6452.775: 0.0237% ( 2) 00:07:54.126 6452.775 - 6503.188: 0.0631% ( 5) 00:07:54.126 6503.188 - 6553.600: 0.1420% ( 10) 00:07:54.126 6553.600 - 6604.012: 0.2131% ( 9) 00:07:54.126 6604.012 - 6654.425: 0.3788% ( 21) 00:07:54.126 6654.425 - 6704.837: 0.4577% ( 10) 00:07:54.126 6704.837 - 6755.249: 0.5997% ( 18) 00:07:54.126 6755.249 - 6805.662: 0.8128% ( 27) 00:07:54.126 6805.662 - 6856.074: 1.2153% ( 51) 00:07:54.126 6856.074 - 6906.486: 1.6730% ( 58) 00:07:54.126 6906.486 - 6956.898: 2.4227% ( 95) 00:07:54.126 6956.898 - 7007.311: 3.3539% ( 118) 00:07:54.126 7007.311 - 7057.723: 4.6402% ( 163) 00:07:54.126 7057.723 - 7108.135: 6.5814% ( 246) 00:07:54.126 7108.135 - 7158.548: 8.0414% ( 185) 00:07:54.126 7158.548 - 7208.960: 10.1247% ( 264) 00:07:54.126 7208.960 - 7259.372: 12.3580% ( 283) 00:07:54.126 7259.372 - 7309.785: 14.1256% ( 224) 00:07:54.126 7309.785 - 7360.197: 15.9564% ( 232) 00:07:54.126 7360.197 - 7410.609: 18.0240% ( 262) 00:07:54.126 7410.609 - 7461.022: 19.3024% ( 162) 00:07:54.126 7461.022 - 7511.434: 20.2967% ( 126) 00:07:54.126 7511.434 - 7561.846: 21.1648% ( 110) 00:07:54.126 7561.846 - 7612.258: 21.8119% ( 82) 00:07:54.126 7612.258 - 7662.671: 22.5142% ( 89) 00:07:54.126 7662.671 - 7713.083: 22.9956% ( 61) 00:07:54.126 7713.083 - 7763.495: 23.4691% ( 60) 00:07:54.126 7763.495 - 7813.908: 24.1398% ( 85) 00:07:54.126 7813.908 - 7864.320: 25.1026% ( 122) 00:07:54.126 7864.320 - 7914.732: 26.0022% ( 114) 00:07:54.126 7914.732 - 7965.145: 26.4520% ( 57) 00:07:54.126 7965.145 - 8015.557: 26.9729% ( 66) 00:07:54.126 8015.557 - 8065.969: 27.7936% ( 104) 00:07:54.126 8065.969 - 8116.382: 28.4643% ( 85) 00:07:54.126 8116.382 - 8166.794: 29.2219% ( 96) 00:07:54.126 8166.794 - 8217.206: 30.1057% ( 112) 00:07:54.126 8217.206 - 8267.618: 31.2184% ( 141) 00:07:54.126 8267.618 - 8318.031: 32.4732% ( 159) 00:07:54.126 8318.031 - 8368.443: 33.7279% ( 159) 00:07:54.126 8368.443 - 8418.855: 35.9059% ( 276) 00:07:54.126 8418.855 - 8469.268: 37.9656% ( 261) 00:07:54.126 8469.268 - 8519.680: 40.0331% ( 262) 00:07:54.126 8519.680 - 8570.092: 41.4378% ( 178) 00:07:54.126 8570.092 - 8620.505: 43.0398% ( 203) 00:07:54.126 8620.505 - 8670.917: 44.1130% ( 136) 00:07:54.126 8670.917 - 8721.329: 45.2573% ( 145) 00:07:54.126 8721.329 - 8771.742: 46.6146% ( 172) 00:07:54.126 8771.742 - 8822.154: 47.9482% ( 169) 00:07:54.126 8822.154 - 8872.566: 48.7768% ( 105) 00:07:54.126 8872.566 - 8922.978: 49.8343% ( 134) 00:07:54.126 8922.978 - 8973.391: 50.7181% ( 112) 00:07:54.126 8973.391 - 9023.803: 51.9650% ( 158) 00:07:54.127 9023.803 - 9074.215: 53.3144% ( 171) 00:07:54.127 9074.215 - 9124.628: 54.2219% ( 115) 00:07:54.127 9124.628 - 9175.040: 55.3819% ( 147) 00:07:54.127 9175.040 - 9225.452: 56.5420% ( 147) 00:07:54.127 9225.452 - 9275.865: 57.6073% ( 135) 00:07:54.127 9275.865 - 9326.277: 58.6648% ( 134) 00:07:54.127 9326.277 - 9376.689: 59.8327% ( 148) 00:07:54.127 9376.689 - 9427.102: 61.0559% ( 155) 00:07:54.127 9427.102 - 9477.514: 61.9160% ( 109) 00:07:54.127 9477.514 - 9527.926: 62.7604% ( 107) 00:07:54.127 9527.926 - 9578.338: 63.7153% ( 121) 00:07:54.127 9578.338 - 9628.751: 64.5518% ( 106) 00:07:54.127 9628.751 - 9679.163: 65.4593% ( 115) 00:07:54.127 9679.163 - 9729.575: 66.0985% ( 81) 00:07:54.127 9729.575 - 9779.988: 66.7456% ( 82) 00:07:54.127 9779.988 - 9830.400: 67.3138% ( 72) 00:07:54.127 9830.400 - 9880.812: 67.9056% ( 75) 00:07:54.127 9880.812 - 9931.225: 68.3081% ( 51) 00:07:54.127 9931.225 - 9981.637: 68.7737% ( 59) 00:07:54.127 9981.637 - 10032.049: 69.0420% ( 
34) 00:07:54.127 10032.049 - 10082.462: 69.3182% ( 35) 00:07:54.127 10082.462 - 10132.874: 69.5628% ( 31) 00:07:54.127 10132.874 - 10183.286: 69.7522% ( 24) 00:07:54.127 10183.286 - 10233.698: 69.9574% ( 26) 00:07:54.127 10233.698 - 10284.111: 70.1862% ( 29) 00:07:54.127 10284.111 - 10334.523: 70.3283% ( 18) 00:07:54.127 10334.523 - 10384.935: 70.5019% ( 22) 00:07:54.127 10384.935 - 10435.348: 70.7307% ( 29) 00:07:54.127 10435.348 - 10485.760: 70.9912% ( 33) 00:07:54.127 10485.760 - 10536.172: 71.2358% ( 31) 00:07:54.127 10536.172 - 10586.585: 71.4173% ( 23) 00:07:54.127 10586.585 - 10636.997: 71.5830% ( 21) 00:07:54.127 10636.997 - 10687.409: 71.7330% ( 19) 00:07:54.127 10687.409 - 10737.822: 71.9460% ( 27) 00:07:54.127 10737.822 - 10788.234: 72.2538% ( 39) 00:07:54.127 10788.234 - 10838.646: 72.4195% ( 21) 00:07:54.127 10838.646 - 10889.058: 72.5616% ( 18) 00:07:54.127 10889.058 - 10939.471: 72.8062% ( 31) 00:07:54.127 10939.471 - 10989.883: 73.1061% ( 38) 00:07:54.127 10989.883 - 11040.295: 73.2955% ( 24) 00:07:54.127 11040.295 - 11090.708: 73.5243% ( 29) 00:07:54.127 11090.708 - 11141.120: 73.7058% ( 23) 00:07:54.127 11141.120 - 11191.532: 73.9978% ( 37) 00:07:54.127 11191.532 - 11241.945: 74.1635% ( 21) 00:07:54.127 11241.945 - 11292.357: 74.3056% ( 18) 00:07:54.127 11292.357 - 11342.769: 74.4634% ( 20) 00:07:54.127 11342.769 - 11393.182: 74.6054% ( 18) 00:07:54.127 11393.182 - 11443.594: 74.8185% ( 27) 00:07:54.127 11443.594 - 11494.006: 75.0395% ( 28) 00:07:54.127 11494.006 - 11544.418: 75.2131% ( 22) 00:07:54.127 11544.418 - 11594.831: 75.3314% ( 15) 00:07:54.127 11594.831 - 11645.243: 75.4340% ( 13) 00:07:54.127 11645.243 - 11695.655: 75.5287% ( 12) 00:07:54.127 11695.655 - 11746.068: 75.6471% ( 15) 00:07:54.127 11746.068 - 11796.480: 75.7970% ( 19) 00:07:54.127 11796.480 - 11846.892: 76.0417% ( 31) 00:07:54.127 11846.892 - 11897.305: 76.1995% ( 20) 00:07:54.127 11897.305 - 11947.717: 76.3573% ( 20) 00:07:54.127 11947.717 - 11998.129: 76.5467% ( 24) 00:07:54.127 11998.129 - 12048.542: 76.7361% ( 24) 00:07:54.127 12048.542 - 12098.954: 76.9176% ( 23) 00:07:54.127 12098.954 - 12149.366: 77.1780% ( 33) 00:07:54.127 12149.366 - 12199.778: 77.4621% ( 36) 00:07:54.127 12199.778 - 12250.191: 77.6673% ( 26) 00:07:54.127 12250.191 - 12300.603: 77.8015% ( 17) 00:07:54.127 12300.603 - 12351.015: 77.8961% ( 12) 00:07:54.127 12351.015 - 12401.428: 78.0066% ( 14) 00:07:54.127 12401.428 - 12451.840: 78.1566% ( 19) 00:07:54.127 12451.840 - 12502.252: 78.3144% ( 20) 00:07:54.127 12502.252 - 12552.665: 78.4564% ( 18) 00:07:54.127 12552.665 - 12603.077: 78.6695% ( 27) 00:07:54.127 12603.077 - 12653.489: 78.8589% ( 24) 00:07:54.127 12653.489 - 12703.902: 79.0641% ( 26) 00:07:54.127 12703.902 - 12754.314: 79.2219% ( 20) 00:07:54.127 12754.314 - 12804.726: 79.4350% ( 27) 00:07:54.127 12804.726 - 12855.138: 79.7191% ( 36) 00:07:54.127 12855.138 - 12905.551: 80.0268% ( 39) 00:07:54.127 12905.551 - 13006.375: 80.5082% ( 61) 00:07:54.127 13006.375 - 13107.200: 80.9343% ( 54) 00:07:54.127 13107.200 - 13208.025: 81.3131% ( 48) 00:07:54.127 13208.025 - 13308.849: 81.7945% ( 61) 00:07:54.127 13308.849 - 13409.674: 82.0549% ( 33) 00:07:54.127 13409.674 - 13510.498: 82.2128% ( 20) 00:07:54.127 13510.498 - 13611.323: 82.4337% ( 28) 00:07:54.127 13611.323 - 13712.148: 82.6862% ( 32) 00:07:54.127 13712.148 - 13812.972: 83.0808% ( 50) 00:07:54.127 13812.972 - 13913.797: 83.6016% ( 66) 00:07:54.127 13913.797 - 14014.622: 84.0830% ( 61) 00:07:54.127 14014.622 - 14115.446: 84.3908% ( 39) 00:07:54.127 14115.446 - 
14216.271: 84.8406% ( 57) 00:07:54.127 14216.271 - 14317.095: 85.2352% ( 50) 00:07:54.127 14317.095 - 14417.920: 85.6534% ( 53) 00:07:54.127 14417.920 - 14518.745: 86.1190% ( 59) 00:07:54.127 14518.745 - 14619.569: 86.6319% ( 65) 00:07:54.127 14619.569 - 14720.394: 87.1607% ( 67) 00:07:54.127 14720.394 - 14821.218: 87.6184% ( 58) 00:07:54.127 14821.218 - 14922.043: 88.2023% ( 74) 00:07:54.127 14922.043 - 15022.868: 88.7468% ( 69) 00:07:54.127 15022.868 - 15123.692: 89.3308% ( 74) 00:07:54.127 15123.692 - 15224.517: 89.8201% ( 62) 00:07:54.127 15224.517 - 15325.342: 90.3330% ( 65) 00:07:54.127 15325.342 - 15426.166: 90.8933% ( 71) 00:07:54.127 15426.166 - 15526.991: 91.4141% ( 66) 00:07:54.127 15526.991 - 15627.815: 91.9508% ( 68) 00:07:54.127 15627.815 - 15728.640: 92.4400% ( 62) 00:07:54.127 15728.640 - 15829.465: 92.8662% ( 54) 00:07:54.127 15829.465 - 15930.289: 93.2844% ( 53) 00:07:54.127 15930.289 - 16031.114: 93.6001% ( 40) 00:07:54.127 16031.114 - 16131.938: 93.8526% ( 32) 00:07:54.127 16131.938 - 16232.763: 94.1367% ( 36) 00:07:54.127 16232.763 - 16333.588: 94.4208% ( 36) 00:07:54.127 16333.588 - 16434.412: 94.6181% ( 25) 00:07:54.127 16434.412 - 16535.237: 94.8785% ( 33) 00:07:54.127 16535.237 - 16636.062: 95.1073% ( 29) 00:07:54.127 16636.062 - 16736.886: 95.3046% ( 25) 00:07:54.127 16736.886 - 16837.711: 95.4624% ( 20) 00:07:54.127 16837.711 - 16938.535: 95.5335% ( 9) 00:07:54.127 16938.535 - 17039.360: 95.5887% ( 7) 00:07:54.127 17039.360 - 17140.185: 95.6676% ( 10) 00:07:54.127 17140.185 - 17241.009: 95.9201% ( 32) 00:07:54.127 17241.009 - 17341.834: 96.2989% ( 48) 00:07:54.127 17341.834 - 17442.658: 96.4962% ( 25) 00:07:54.127 17442.658 - 17543.483: 96.7093% ( 27) 00:07:54.127 17543.483 - 17644.308: 96.8908% ( 23) 00:07:54.127 17644.308 - 17745.132: 97.0723% ( 23) 00:07:54.127 17745.132 - 17845.957: 97.2617% ( 24) 00:07:54.127 17845.957 - 17946.782: 97.4590% ( 25) 00:07:54.127 17946.782 - 18047.606: 97.6641% ( 26) 00:07:54.127 18047.606 - 18148.431: 97.8220% ( 20) 00:07:54.127 18148.431 - 18249.255: 97.9798% ( 20) 00:07:54.127 18249.255 - 18350.080: 98.0824% ( 13) 00:07:54.127 18350.080 - 18450.905: 98.1613% ( 10) 00:07:54.127 18450.905 - 18551.729: 98.2797% ( 15) 00:07:54.127 18551.729 - 18652.554: 98.4138% ( 17) 00:07:54.127 18652.554 - 18753.378: 98.5085% ( 12) 00:07:54.127 18753.378 - 18854.203: 98.5717% ( 8) 00:07:54.127 18854.203 - 18955.028: 98.6427% ( 9) 00:07:54.127 18955.028 - 19055.852: 98.6900% ( 6) 00:07:54.127 19055.852 - 19156.677: 98.7532% ( 8) 00:07:54.127 19156.677 - 19257.502: 98.8321% ( 10) 00:07:54.127 19257.502 - 19358.326: 98.9031% ( 9) 00:07:54.127 19358.326 - 19459.151: 98.9662% ( 8) 00:07:54.127 19459.151 - 19559.975: 99.0294% ( 8) 00:07:54.127 19559.975 - 19660.800: 99.0925% ( 8) 00:07:54.127 19660.800 - 19761.625: 99.1398% ( 6) 00:07:54.127 19761.625 - 19862.449: 99.1951% ( 7) 00:07:54.127 19862.449 - 19963.274: 99.2740% ( 10) 00:07:54.127 19963.274 - 20064.098: 99.3687% ( 12) 00:07:54.127 20064.098 - 20164.923: 99.4081% ( 5) 00:07:54.127 20164.923 - 20265.748: 99.4318% ( 3) 00:07:54.127 20265.748 - 20366.572: 99.4555% ( 3) 00:07:54.127 20366.572 - 20467.397: 99.4792% ( 3) 00:07:54.127 20467.397 - 20568.222: 99.4949% ( 2) 00:07:54.127 23189.662 - 23290.486: 99.5028% ( 1) 00:07:54.127 23290.486 - 23391.311: 99.5186% ( 2) 00:07:54.127 23391.311 - 23492.135: 99.5502% ( 4) 00:07:54.127 23492.135 - 23592.960: 99.6291% ( 10) 00:07:54.127 23592.960 - 23693.785: 99.6528% ( 3) 00:07:54.127 24399.557 - 24500.382: 99.6843% ( 4) 00:07:54.127 24500.382 - 
24601.206: 99.7001% ( 2) 00:07:54.127 24601.206 - 24702.031: 99.7238% ( 3) 00:07:54.127 24702.031 - 24802.855: 99.7475% ( 3) 00:07:54.127 24802.855 - 24903.680: 99.7790% ( 4) 00:07:54.127 24903.680 - 25004.505: 99.8027% ( 3) 00:07:54.127 25004.505 - 25105.329: 99.8343% ( 4) 00:07:54.127 25105.329 - 25206.154: 99.8580% ( 3) 00:07:54.127 25206.154 - 25306.978: 99.8895% ( 4) 00:07:54.127 25306.978 - 25407.803: 99.9132% ( 3) 00:07:54.127 25407.803 - 25508.628: 99.9369% ( 3) 00:07:54.127 25508.628 - 25609.452: 99.9684% ( 4) 00:07:54.127 25609.452 - 25710.277: 99.9921% ( 3) 00:07:54.127 25710.277 - 25811.102: 100.0000% ( 1) 00:07:54.127 00:07:54.127 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:54.127 ============================================================================== 00:07:54.127 Range in us Cumulative IO count 00:07:54.127 6351.951 - 6377.157: 0.0079% ( 1) 00:07:54.127 6427.569 - 6452.775: 0.0237% ( 2) 00:07:54.127 6452.775 - 6503.188: 0.0552% ( 4) 00:07:54.127 6503.188 - 6553.600: 0.0868% ( 4) 00:07:54.127 6553.600 - 6604.012: 0.1263% ( 5) 00:07:54.127 6604.012 - 6654.425: 0.1578% ( 4) 00:07:54.127 6654.425 - 6704.837: 0.4340% ( 35) 00:07:54.127 6704.837 - 6755.249: 0.5287% ( 12) 00:07:54.127 6755.249 - 6805.662: 0.6866% ( 20) 00:07:54.128 6805.662 - 6856.074: 1.1916% ( 64) 00:07:54.128 6856.074 - 6906.486: 1.6809% ( 62) 00:07:54.128 6906.486 - 6956.898: 2.5174% ( 106) 00:07:54.128 6956.898 - 7007.311: 3.6143% ( 139) 00:07:54.128 7007.311 - 7057.723: 5.0584% ( 183) 00:07:54.128 7057.723 - 7108.135: 7.2364% ( 276) 00:07:54.128 7108.135 - 7158.548: 8.7989% ( 198) 00:07:54.128 7158.548 - 7208.960: 11.0164% ( 281) 00:07:54.128 7208.960 - 7259.372: 13.2023% ( 277) 00:07:54.128 7259.372 - 7309.785: 14.9542% ( 222) 00:07:54.128 7309.785 - 7360.197: 16.7614% ( 229) 00:07:54.128 7360.197 - 7410.609: 18.3949% ( 207) 00:07:54.128 7410.609 - 7461.022: 19.3419% ( 120) 00:07:54.128 7461.022 - 7511.434: 20.2415% ( 114) 00:07:54.128 7511.434 - 7561.846: 21.0780% ( 106) 00:07:54.128 7561.846 - 7612.258: 21.6619% ( 74) 00:07:54.128 7612.258 - 7662.671: 22.3485% ( 87) 00:07:54.128 7662.671 - 7713.083: 23.0429% ( 88) 00:07:54.128 7713.083 - 7763.495: 23.4217% ( 48) 00:07:54.128 7763.495 - 7813.908: 23.8005% ( 48) 00:07:54.128 7813.908 - 7864.320: 24.5265% ( 92) 00:07:54.128 7864.320 - 7914.732: 25.4261% ( 114) 00:07:54.128 7914.732 - 7965.145: 25.8602% ( 55) 00:07:54.128 7965.145 - 8015.557: 26.4283% ( 72) 00:07:54.128 8015.557 - 8065.969: 27.1386% ( 90) 00:07:54.128 8065.969 - 8116.382: 27.9356% ( 101) 00:07:54.128 8116.382 - 8166.794: 28.7405% ( 102) 00:07:54.128 8166.794 - 8217.206: 30.2083% ( 186) 00:07:54.128 8217.206 - 8267.618: 31.3131% ( 140) 00:07:54.128 8267.618 - 8318.031: 33.0414% ( 219) 00:07:54.128 8318.031 - 8368.443: 34.0830% ( 132) 00:07:54.128 8368.443 - 8418.855: 36.2374% ( 273) 00:07:54.128 8418.855 - 8469.268: 38.3444% ( 267) 00:07:54.128 8469.268 - 8519.680: 40.5382% ( 278) 00:07:54.128 8519.680 - 8570.092: 41.9271% ( 176) 00:07:54.128 8570.092 - 8620.505: 43.1976% ( 161) 00:07:54.128 8620.505 - 8670.917: 44.2708% ( 136) 00:07:54.128 8670.917 - 8721.329: 45.2967% ( 130) 00:07:54.128 8721.329 - 8771.742: 46.0306% ( 93) 00:07:54.128 8771.742 - 8822.154: 46.6777% ( 82) 00:07:54.128 8822.154 - 8872.566: 47.6641% ( 125) 00:07:54.128 8872.566 - 8922.978: 49.1083% ( 183) 00:07:54.128 8922.978 - 8973.391: 49.9842% ( 111) 00:07:54.128 8973.391 - 9023.803: 51.7440% ( 223) 00:07:54.128 9023.803 - 9074.215: 53.2592% ( 192) 00:07:54.128 9074.215 - 9124.628: 54.2929% ( 
131) 00:07:54.128 9124.628 - 9175.040: 55.3504% ( 134) 00:07:54.128 9175.040 - 9225.452: 56.4631% ( 141) 00:07:54.128 9225.452 - 9275.865: 57.6152% ( 146) 00:07:54.128 9275.865 - 9326.277: 59.0041% ( 176) 00:07:54.128 9326.277 - 9376.689: 60.1799% ( 149) 00:07:54.128 9376.689 - 9427.102: 61.2926% ( 141) 00:07:54.128 9427.102 - 9477.514: 62.0265% ( 93) 00:07:54.128 9477.514 - 9527.926: 62.8472% ( 104) 00:07:54.128 9527.926 - 9578.338: 63.5574% ( 90) 00:07:54.128 9578.338 - 9628.751: 64.2992% ( 94) 00:07:54.128 9628.751 - 9679.163: 65.0963% ( 101) 00:07:54.128 9679.163 - 9729.575: 65.7355% ( 81) 00:07:54.128 9729.575 - 9779.988: 66.4457% ( 90) 00:07:54.128 9779.988 - 9830.400: 66.8640% ( 53) 00:07:54.128 9830.400 - 9880.812: 67.4953% ( 80) 00:07:54.128 9880.812 - 9931.225: 67.9135% ( 53) 00:07:54.128 9931.225 - 9981.637: 68.2371% ( 41) 00:07:54.128 9981.637 - 10032.049: 68.6237% ( 49) 00:07:54.128 10032.049 - 10082.462: 69.0025% ( 48) 00:07:54.128 10082.462 - 10132.874: 69.2629% ( 33) 00:07:54.128 10132.874 - 10183.286: 69.4681% ( 26) 00:07:54.128 10183.286 - 10233.698: 69.8469% ( 48) 00:07:54.128 10233.698 - 10284.111: 70.0836% ( 30) 00:07:54.128 10284.111 - 10334.523: 70.3441% ( 33) 00:07:54.128 10334.523 - 10384.935: 70.5019% ( 20) 00:07:54.128 10384.935 - 10435.348: 70.6439% ( 18) 00:07:54.128 10435.348 - 10485.760: 70.8254% ( 23) 00:07:54.128 10485.760 - 10536.172: 71.0780% ( 32) 00:07:54.128 10536.172 - 10586.585: 71.2910% ( 27) 00:07:54.128 10586.585 - 10636.997: 71.4410% ( 19) 00:07:54.128 10636.997 - 10687.409: 71.6067% ( 21) 00:07:54.128 10687.409 - 10737.822: 71.9145% ( 39) 00:07:54.128 10737.822 - 10788.234: 72.2064% ( 37) 00:07:54.128 10788.234 - 10838.646: 72.3958% ( 24) 00:07:54.128 10838.646 - 10889.058: 72.6878% ( 37) 00:07:54.128 10889.058 - 10939.471: 72.9009% ( 27) 00:07:54.128 10939.471 - 10989.883: 73.1140% ( 27) 00:07:54.128 10989.883 - 11040.295: 73.3033% ( 24) 00:07:54.128 11040.295 - 11090.708: 73.5164% ( 27) 00:07:54.128 11090.708 - 11141.120: 73.7768% ( 33) 00:07:54.128 11141.120 - 11191.532: 74.0294% ( 32) 00:07:54.128 11191.532 - 11241.945: 74.2582% ( 29) 00:07:54.128 11241.945 - 11292.357: 74.6528% ( 50) 00:07:54.128 11292.357 - 11342.769: 74.8658% ( 27) 00:07:54.128 11342.769 - 11393.182: 75.0552% ( 24) 00:07:54.128 11393.182 - 11443.594: 75.2999% ( 31) 00:07:54.128 11443.594 - 11494.006: 75.5287% ( 29) 00:07:54.128 11494.006 - 11544.418: 75.7023% ( 22) 00:07:54.128 11544.418 - 11594.831: 75.8602% ( 20) 00:07:54.128 11594.831 - 11645.243: 76.0101% ( 19) 00:07:54.128 11645.243 - 11695.655: 76.1758% ( 21) 00:07:54.128 11695.655 - 11746.068: 76.3336% ( 20) 00:07:54.128 11746.068 - 11796.480: 76.4362% ( 13) 00:07:54.128 11796.480 - 11846.892: 76.5783% ( 18) 00:07:54.128 11846.892 - 11897.305: 76.6888% ( 14) 00:07:54.128 11897.305 - 11947.717: 76.8860% ( 25) 00:07:54.128 11947.717 - 11998.129: 77.0123% ( 16) 00:07:54.128 11998.129 - 12048.542: 77.1307% ( 15) 00:07:54.128 12048.542 - 12098.954: 77.2333% ( 13) 00:07:54.128 12098.954 - 12149.366: 77.4148% ( 23) 00:07:54.128 12149.366 - 12199.778: 77.5489% ( 17) 00:07:54.128 12199.778 - 12250.191: 77.7068% ( 20) 00:07:54.128 12250.191 - 12300.603: 77.8093% ( 13) 00:07:54.128 12300.603 - 12351.015: 77.9119% ( 13) 00:07:54.128 12351.015 - 12401.428: 78.0303% ( 15) 00:07:54.128 12401.428 - 12451.840: 78.1329% ( 13) 00:07:54.128 12451.840 - 12502.252: 78.2749% ( 18) 00:07:54.128 12502.252 - 12552.665: 78.4170% ( 18) 00:07:54.128 12552.665 - 12603.077: 78.5038% ( 11) 00:07:54.128 12603.077 - 12653.489: 78.5748% ( 9) 
00:07:54.128 12653.489 - 12703.902: 78.6695% ( 12) 00:07:54.128 12703.902 - 12754.314: 78.7879% ( 15) 00:07:54.128 12754.314 - 12804.726: 78.9141% ( 16) 00:07:54.128 12804.726 - 12855.138: 79.0483% ( 17) 00:07:54.128 12855.138 - 12905.551: 79.3008% ( 32) 00:07:54.128 12905.551 - 13006.375: 79.6796% ( 48) 00:07:54.128 13006.375 - 13107.200: 80.0110% ( 42) 00:07:54.128 13107.200 - 13208.025: 80.3504% ( 43) 00:07:54.128 13208.025 - 13308.849: 80.6424% ( 37) 00:07:54.128 13308.849 - 13409.674: 81.0054% ( 46) 00:07:54.128 13409.674 - 13510.498: 81.2579% ( 32) 00:07:54.128 13510.498 - 13611.323: 81.7708% ( 65) 00:07:54.128 13611.323 - 13712.148: 82.2680% ( 63) 00:07:54.128 13712.148 - 13812.972: 82.8835% ( 78) 00:07:54.128 13812.972 - 13913.797: 83.3807% ( 63) 00:07:54.128 13913.797 - 14014.622: 84.0357% ( 83) 00:07:54.128 14014.622 - 14115.446: 84.7301% ( 88) 00:07:54.128 14115.446 - 14216.271: 85.2746% ( 69) 00:07:54.128 14216.271 - 14317.095: 85.8507% ( 73) 00:07:54.128 14317.095 - 14417.920: 86.4347% ( 74) 00:07:54.128 14417.920 - 14518.745: 86.9555% ( 66) 00:07:54.128 14518.745 - 14619.569: 87.5710% ( 78) 00:07:54.128 14619.569 - 14720.394: 87.9656% ( 50) 00:07:54.128 14720.394 - 14821.218: 88.3838% ( 53) 00:07:54.128 14821.218 - 14922.043: 88.7863% ( 51) 00:07:54.128 14922.043 - 15022.868: 89.2203% ( 55) 00:07:54.128 15022.868 - 15123.692: 89.6070% ( 49) 00:07:54.128 15123.692 - 15224.517: 89.9779% ( 47) 00:07:54.128 15224.517 - 15325.342: 90.3409% ( 46) 00:07:54.128 15325.342 - 15426.166: 90.6960% ( 45) 00:07:54.128 15426.166 - 15526.991: 91.1064% ( 52) 00:07:54.128 15526.991 - 15627.815: 91.4773% ( 47) 00:07:54.128 15627.815 - 15728.640: 91.7850% ( 39) 00:07:54.128 15728.640 - 15829.465: 92.1480% ( 46) 00:07:54.128 15829.465 - 15930.289: 92.4558% ( 39) 00:07:54.128 15930.289 - 16031.114: 92.8977% ( 56) 00:07:54.128 16031.114 - 16131.938: 93.3002% ( 51) 00:07:54.128 16131.938 - 16232.763: 93.7737% ( 60) 00:07:54.128 16232.763 - 16333.588: 94.2235% ( 57) 00:07:54.128 16333.588 - 16434.412: 94.5628% ( 43) 00:07:54.128 16434.412 - 16535.237: 94.8232% ( 33) 00:07:54.128 16535.237 - 16636.062: 95.0126% ( 24) 00:07:54.128 16636.062 - 16736.886: 95.3046% ( 37) 00:07:54.128 16736.886 - 16837.711: 95.6203% ( 40) 00:07:54.128 16837.711 - 16938.535: 95.8807% ( 33) 00:07:54.128 16938.535 - 17039.360: 96.1411% ( 33) 00:07:54.128 17039.360 - 17140.185: 96.3305% ( 24) 00:07:54.128 17140.185 - 17241.009: 96.5672% ( 30) 00:07:54.128 17241.009 - 17341.834: 96.7882% ( 28) 00:07:54.128 17341.834 - 17442.658: 96.9934% ( 26) 00:07:54.128 17442.658 - 17543.483: 97.2222% ( 29) 00:07:54.128 17543.483 - 17644.308: 97.4274% ( 26) 00:07:54.128 17644.308 - 17745.132: 97.6720% ( 31) 00:07:54.128 17745.132 - 17845.957: 97.8299% ( 20) 00:07:54.128 17845.957 - 17946.782: 97.9798% ( 19) 00:07:54.128 17946.782 - 18047.606: 98.1297% ( 19) 00:07:54.128 18047.606 - 18148.431: 98.2955% ( 21) 00:07:54.128 18148.431 - 18249.255: 98.4375% ( 18) 00:07:54.128 18249.255 - 18350.080: 98.5638% ( 16) 00:07:54.128 18350.080 - 18450.905: 98.6585% ( 12) 00:07:54.128 18450.905 - 18551.729: 98.7374% ( 10) 00:07:54.128 18551.729 - 18652.554: 98.8084% ( 9) 00:07:54.128 18652.554 - 18753.378: 98.8557% ( 6) 00:07:54.128 18753.378 - 18854.203: 98.8952% ( 5) 00:07:54.128 18854.203 - 18955.028: 98.9504% ( 7) 00:07:54.128 18955.028 - 19055.852: 98.9899% ( 5) 00:07:54.128 19257.502 - 19358.326: 99.0057% ( 2) 00:07:54.128 19358.326 - 19459.151: 99.0294% ( 3) 00:07:54.128 19459.151 - 19559.975: 99.0609% ( 4) 00:07:54.129 19559.975 - 19660.800: 
99.0925% ( 4) 00:07:54.129 19660.800 - 19761.625: 99.1319% ( 5) 00:07:54.129 19761.625 - 19862.449: 99.1714% ( 5) 00:07:54.129 19862.449 - 19963.274: 99.2030% ( 4) 00:07:54.129 19963.274 - 20064.098: 99.2503% ( 6) 00:07:54.129 20064.098 - 20164.923: 99.2819% ( 4) 00:07:54.129 20164.923 - 20265.748: 99.3134% ( 4) 00:07:54.129 20265.748 - 20366.572: 99.3766% ( 8) 00:07:54.129 20366.572 - 20467.397: 99.4081% ( 4) 00:07:54.129 20467.397 - 20568.222: 99.4397% ( 4) 00:07:54.129 20568.222 - 20669.046: 99.4713% ( 4) 00:07:54.129 20669.046 - 20769.871: 99.4949% ( 3) 00:07:54.129 21475.643 - 21576.468: 99.5107% ( 2) 00:07:54.129 21576.468 - 21677.292: 99.5265% ( 2) 00:07:54.129 21677.292 - 21778.117: 99.5423% ( 2) 00:07:54.129 21778.117 - 21878.942: 99.5581% ( 2) 00:07:54.129 21878.942 - 21979.766: 99.5660% ( 1) 00:07:54.129 22584.714 - 22685.538: 99.5896% ( 3) 00:07:54.129 22685.538 - 22786.363: 99.6133% ( 3) 00:07:54.129 22786.363 - 22887.188: 99.6370% ( 3) 00:07:54.129 22887.188 - 22988.012: 99.6686% ( 4) 00:07:54.129 22988.012 - 23088.837: 99.6843% ( 2) 00:07:54.129 23088.837 - 23189.662: 99.7080% ( 3) 00:07:54.129 23189.662 - 23290.486: 99.7396% ( 4) 00:07:54.129 23290.486 - 23391.311: 99.7633% ( 3) 00:07:54.129 23391.311 - 23492.135: 99.7948% ( 4) 00:07:54.129 23492.135 - 23592.960: 99.8185% ( 3) 00:07:54.129 23592.960 - 23693.785: 99.8422% ( 3) 00:07:54.129 23693.785 - 23794.609: 99.8658% ( 3) 00:07:54.129 23794.609 - 23895.434: 99.8974% ( 4) 00:07:54.129 23895.434 - 23996.258: 99.9290% ( 4) 00:07:54.129 23996.258 - 24097.083: 99.9527% ( 3) 00:07:54.129 24097.083 - 24197.908: 99.9842% ( 4) 00:07:54.129 24197.908 - 24298.732: 100.0000% ( 2) 00:07:54.129 00:07:54.129 09:01:32 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:54.129 00:07:54.129 real 0m2.583s 00:07:54.129 user 0m2.255s 00:07:54.129 sys 0m0.223s 00:07:54.129 09:01:32 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.129 ************************************ 00:07:54.129 END TEST nvme_perf 00:07:54.129 ************************************ 00:07:54.129 09:01:32 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:54.129 09:01:32 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:54.129 09:01:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:54.129 09:01:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.129 09:01:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.129 ************************************ 00:07:54.129 START TEST nvme_hello_world 00:07:54.129 ************************************ 00:07:54.129 09:01:32 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:54.129 Initializing NVMe Controllers 00:07:54.129 Attached to 0000:00:10.0 00:07:54.129 Namespace ID: 1 size: 6GB 00:07:54.129 Attached to 0000:00:11.0 00:07:54.129 Namespace ID: 1 size: 5GB 00:07:54.129 Attached to 0000:00:13.0 00:07:54.129 Namespace ID: 1 size: 1GB 00:07:54.129 Attached to 0000:00:12.0 00:07:54.129 Namespace ID: 1 size: 4GB 00:07:54.129 Namespace ID: 2 size: 4GB 00:07:54.129 Namespace ID: 3 size: 4GB 00:07:54.129 Initialization complete. 00:07:54.129 INFO: using host memory buffer for IO 00:07:54.129 Hello world! 00:07:54.129 INFO: using host memory buffer for IO 00:07:54.129 Hello world! 00:07:54.129 INFO: using host memory buffer for IO 00:07:54.129 Hello world! 
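Each row of the latency histograms above has the form "lower_us - upper_us: cumulative% ( count )", where the percentage is the cumulative share of I/Os completed at or below the bucket's upper bound. An approximate percentile can therefore be read off as the upper bound of the first bucket whose cumulative share reaches the target. A minimal sketch of that read-off, assuming the histogram rows have been saved to a file named histogram.txt (a hypothetical extract of the rows above; the script tolerates an optional leading log timestamp):

    # Read an approximate p99 off a cumulative latency histogram.
    # Rows look like:  9931.225 - 9981.637: 68.8920% ( 35)
    # histogram.txt is a hypothetical extract of the rows above.
    awk '{
      for (i = 3; i <= NF; i++)
        if ($i ~ /%$/ && $(i-2) == "-") {
          up = $(i-1); sub(/:$/, "", up)   # bucket upper bound, in us
          pct = $i;    sub(/%$/, "", pct)  # cumulative percentage
          if (pct + 0 >= 99.0) { printf "approx p99 <= %s us\n", up; exit }
        }
    }' histogram.txt

Because the histogram only brackets latencies into buckets, this yields an upper bound on p99 rather than its exact value.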
00:07:54.129 09:01:32 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:54.129 09:01:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:54.129 09:01:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.129 09:01:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.129 ************************************
00:07:54.129 START TEST nvme_hello_world
00:07:54.129 ************************************
00:07:54.129 09:01:32 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:54.129 Initializing NVMe Controllers
00:07:54.129 Attached to 0000:00:10.0
00:07:54.129 Namespace ID: 1 size: 6GB
00:07:54.129 Attached to 0000:00:11.0
00:07:54.129 Namespace ID: 1 size: 5GB
00:07:54.129 Attached to 0000:00:13.0
00:07:54.129 Namespace ID: 1 size: 1GB
00:07:54.129 Attached to 0000:00:12.0
00:07:54.129 Namespace ID: 1 size: 4GB
00:07:54.129 Namespace ID: 2 size: 4GB
00:07:54.129 Namespace ID: 3 size: 4GB
00:07:54.129 Initialization complete.
00:07:54.129 INFO: using host memory buffer for IO
00:07:54.129 Hello world!
00:07:54.129 INFO: using host memory buffer for IO
00:07:54.129 Hello world!
00:07:54.129 INFO: using host memory buffer for IO
00:07:54.129 Hello world!
00:07:54.129 INFO: using host memory buffer for IO
00:07:54.129 Hello world!
00:07:54.129 INFO: using host memory buffer for IO
00:07:54.129 Hello world!
00:07:54.129 INFO: using host memory buffer for IO
00:07:54.129 Hello world!
00:07:54.129
00:07:54.129 real 0m0.254s
00:07:54.129 user 0m0.094s
00:07:54.129 sys 0m0.099s
00:07:54.129 09:01:32 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.129 09:01:32 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:54.129 ************************************
00:07:54.129 END TEST nvme_hello_world
00:07:54.129 ************************************
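The hello_world example writes a buffer to every namespace it attaches and reads it back, printing one "INFO: using host memory buffer for IO" / "Hello world!" pair per namespace, so the six namespaces enumerated above produce six greetings. A quick consistency check over a saved copy of the section, assuming a hypothetical capture file hello.log:

    # Count namespaces vs. greetings in the nvme_hello_world output.
    # hello.log is a hypothetical capture of the section above.
    ns=$(grep -c 'Namespace ID:' hello.log)
    hw=$(grep -c 'Hello world!' hello.log)
    if [ "$ns" -eq "$hw" ]; then
        echo "OK: $hw namespaces wrote and read back a buffer"
    else
        echo "MISMATCH: $ns namespaces but $hw greetings"
    fi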
00:07:54.129 09:01:33 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:54.129 09:01:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:54.129 09:01:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.129 09:01:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:54.129 ************************************
00:07:54.129 START TEST nvme_sgl
00:07:54.129 ************************************
00:07:54.129 09:01:33 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:54.391 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:54.391 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:54.391 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:54.391 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:54.391 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:54.391 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:54.391 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:54.391 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:54.391 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:54.391 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:54.391 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:54.391 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:54.391 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:54.391 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:54.391 NVMe Readv/Writev Request test
00:07:54.391 Attached to 0000:00:10.0
00:07:54.391 Attached to 0000:00:11.0
00:07:54.391 Attached to 0000:00:13.0
00:07:54.391 Attached to 0000:00:12.0
00:07:54.391 0000:00:10.0: build_io_request_2 test passed
00:07:54.391 0000:00:10.0: build_io_request_4 test passed
00:07:54.391 0000:00:10.0: build_io_request_5 test passed
00:07:54.391 0000:00:10.0: build_io_request_6 test passed
00:07:54.391 0000:00:10.0: build_io_request_7 test passed
00:07:54.391 0000:00:10.0: build_io_request_10 test passed
00:07:54.391 0000:00:11.0: build_io_request_2 test passed
00:07:54.391 0000:00:11.0: build_io_request_4 test passed
00:07:54.391 0000:00:11.0: build_io_request_5 test passed
00:07:54.391 0000:00:11.0: build_io_request_6 test passed
00:07:54.391 0000:00:11.0: build_io_request_7 test passed
00:07:54.391 0000:00:11.0: build_io_request_10 test passed
00:07:54.391 Cleaning up...
00:07:54.651
00:07:54.651 real 0m0.299s
00:07:54.651 user 0m0.145s
00:07:54.651 sys 0m0.106s
00:07:54.651 09:01:33 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.651 ************************************
00:07:54.651 END TEST nvme_sgl
00:07:54.651 ************************************
00:07:54.651 09:01:33 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
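In the SGL output above, "Invalid IO length parameter" marks a request the build step is expected to reject, while "test passed" marks a request that built and completed; note that only 0000:00:10.0 and 0000:00:11.0 report passing requests in this run. A per-controller tally can be sketched as follows, assuming the section is captured to a hypothetical sgl.log with one entry per line:

    # Tally build_io_request outcomes per controller from the SGL section.
    # sgl.log is a hypothetical capture of the output above.
    awk '/build_io_request/ {
      for (i = 1; i <= NF; i++)
        if ($i ~ /^0000:00:1[0-9]\.0:$/) { c = $i; sub(/:$/, "", c); seen[c] = 1 }
      if (/test passed/)        pass[c]++
      if (/Invalid IO length/)  rej[c]++
    }
    END {
      for (c in seen)
        printf "%s: %d passed, %d rejected as expected\n", c, pass[c], rej[c]
    }' sgl.log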
00:07:54.909 00:07:54.909 real 0m0.242s 00:07:54.909 user 0m0.080s 00:07:54.909 sys 0m0.112s 00:07:54.909 ************************************ 00:07:54.909 END TEST nvme_e2edp 00:07:54.909 09:01:33 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.909 09:01:33 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:54.909 ************************************ 00:07:54.909 09:01:33 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:54.909 09:01:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:54.909 09:01:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.909 09:01:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.909 ************************************ 00:07:54.909 START TEST nvme_reserve 00:07:54.909 ************************************ 00:07:54.909 09:01:33 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:55.167 ===================================================== 00:07:55.167 NVMe Controller at PCI bus 0, device 16, function 0 00:07:55.167 ===================================================== 00:07:55.167 Reservations: Not Supported 00:07:55.167 ===================================================== 00:07:55.167 NVMe Controller at PCI bus 0, device 17, function 0 00:07:55.167 ===================================================== 00:07:55.167 Reservations: Not Supported 00:07:55.167 ===================================================== 00:07:55.167 NVMe Controller at PCI bus 0, device 19, function 0 00:07:55.167 ===================================================== 00:07:55.167 Reservations: Not Supported 00:07:55.167 ===================================================== 00:07:55.167 NVMe Controller at PCI bus 0, device 18, function 0 00:07:55.167 ===================================================== 00:07:55.167 Reservations: Not Supported 00:07:55.167 Reservation test passed 00:07:55.167 00:07:55.167 real 0m0.211s 00:07:55.167 user 0m0.076s 00:07:55.167 sys 0m0.096s 00:07:55.167 09:01:33 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.167 09:01:33 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 ************************************ 00:07:55.167 END TEST nvme_reserve 00:07:55.167 ************************************ 00:07:55.167 09:01:33 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:55.167 09:01:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.167 09:01:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.167 09:01:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.167 ************************************ 00:07:55.167 START TEST nvme_err_injection 00:07:55.167 ************************************ 00:07:55.167 09:01:33 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:55.426 NVMe Error Injection test 00:07:55.426 Attached to 0000:00:10.0 00:07:55.426 Attached to 0000:00:11.0 00:07:55.426 Attached to 0000:00:13.0 00:07:55.426 Attached to 0000:00:12.0 00:07:55.426 0000:00:12.0: get features failed as expected 00:07:55.426 0000:00:10.0: get features failed as expected 00:07:55.426 0000:00:11.0: get features failed as expected 00:07:55.426 0000:00:13.0: get features failed as expected 00:07:55.426 
0000:00:10.0: get features successfully as expected 00:07:55.426 0000:00:11.0: get features successfully as expected 00:07:55.426 0000:00:13.0: get features successfully as expected 00:07:55.426 0000:00:12.0: get features successfully as expected 00:07:55.426 0000:00:10.0: read failed as expected 00:07:55.426 0000:00:11.0: read failed as expected 00:07:55.427 0000:00:13.0: read failed as expected 00:07:55.427 0000:00:12.0: read failed as expected 00:07:55.427 0000:00:10.0: read successfully as expected 00:07:55.427 0000:00:11.0: read successfully as expected 00:07:55.427 0000:00:13.0: read successfully as expected 00:07:55.427 0000:00:12.0: read successfully as expected 00:07:55.427 Cleaning up... 00:07:55.427 00:07:55.427 real 0m0.244s 00:07:55.427 user 0m0.095s 00:07:55.427 sys 0m0.106s 00:07:55.427 09:01:34 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.427 09:01:34 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:55.427 ************************************ 00:07:55.427 END TEST nvme_err_injection 00:07:55.427 ************************************ 00:07:55.427 09:01:34 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:55.427 09:01:34 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:55.427 09:01:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.427 09:01:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.427 ************************************ 00:07:55.427 START TEST nvme_overhead 00:07:55.427 ************************************ 00:07:55.427 09:01:34 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:56.800 Initializing NVMe Controllers 00:07:56.800 Attached to 0000:00:10.0 00:07:56.800 Attached to 0000:00:11.0 00:07:56.800 Attached to 0000:00:13.0 00:07:56.800 Attached to 0000:00:12.0 00:07:56.800 Initialization complete. Launching workers. 
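Context for the two histograms that follow: the overhead tool issues 4 KiB reads one at a time (the "-o 4096 -t 1 -H -i 0" arguments above) and buckets two per-I/O costs in nanoseconds, the time spent inside the submission call ("submit") and the time spent in the completion path ("complete"). A sketch of how a single submit sample could be taken, assuming an attached namespace, a completion callback, and a DMA-able buffer already exist (all names illustrative):

```c
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Time one submission: timestamp around the call, convert ticks to ns.
 * The read may still be in flight when this returns; only the CPU cost
 * of queueing it is being measured. */
static uint64_t
time_one_submit(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qp,
		void *buf, spdk_nvme_cmd_cb cb, void *cb_arg)
{
	uint64_t hz = spdk_get_ticks_hz();
	uint64_t t0 = spdk_get_ticks();

	/* 4096-byte read at LBA 0, mirroring -o 4096 above */
	spdk_nvme_ns_cmd_read(ns, qp, buf, 0,
			      4096 / spdk_nvme_ns_get_sector_size(ns),
			      cb, cb_arg, 0);

	return (spdk_get_ticks() - t0) * 1000000000ULL / hz;
}
```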
00:07:56.800 submit (in ns) avg, min, max = 11610.3, 10685.4, 84585.4 00:07:56.800 complete (in ns) avg, min, max = 7785.2, 7222.3, 220495.4 00:07:56.800 00:07:56.800 Submit histogram 00:07:56.800 ================ 00:07:56.800 Range in us Cumulative Count 00:07:56.800 10.683 - 10.732: 0.0206% ( 3) 00:07:56.800 10.732 - 10.782: 0.1238% ( 15) 00:07:56.800 10.782 - 10.831: 0.5640% ( 64) 00:07:56.800 10.831 - 10.880: 1.3755% ( 118) 00:07:56.800 10.880 - 10.929: 2.7098% ( 194) 00:07:56.800 10.929 - 10.978: 5.2407% ( 368) 00:07:56.800 10.978 - 11.028: 8.9684% ( 542) 00:07:56.800 11.028 - 11.077: 14.2710% ( 771) 00:07:56.800 11.077 - 11.126: 21.5818% ( 1063) 00:07:56.800 11.126 - 11.175: 30.0757% ( 1235) 00:07:56.800 11.175 - 11.225: 39.2435% ( 1333) 00:07:56.800 11.225 - 11.274: 47.7304% ( 1234) 00:07:56.800 11.274 - 11.323: 55.1032% ( 1072) 00:07:56.800 11.323 - 11.372: 60.8047% ( 829) 00:07:56.800 11.372 - 11.422: 65.2613% ( 648) 00:07:56.800 11.422 - 11.471: 68.7001% ( 500) 00:07:56.800 11.471 - 11.520: 71.3411% ( 384) 00:07:56.800 11.520 - 11.569: 73.8102% ( 359) 00:07:56.800 11.569 - 11.618: 75.9078% ( 305) 00:07:56.800 11.618 - 11.668: 77.7854% ( 273) 00:07:56.800 11.668 - 11.717: 79.6011% ( 264) 00:07:56.800 11.717 - 11.766: 81.1554% ( 226) 00:07:56.800 11.766 - 11.815: 82.8267% ( 243) 00:07:56.800 11.815 - 11.865: 84.3466% ( 221) 00:07:56.800 11.865 - 11.914: 86.0935% ( 254) 00:07:56.800 11.914 - 11.963: 87.5103% ( 206) 00:07:56.800 11.963 - 12.012: 88.9546% ( 210) 00:07:56.800 12.012 - 12.062: 90.3714% ( 206) 00:07:56.800 12.062 - 12.111: 91.5199% ( 167) 00:07:56.800 12.111 - 12.160: 92.5103% ( 144) 00:07:56.800 12.160 - 12.209: 93.3356% ( 120) 00:07:56.800 12.209 - 12.258: 94.0509% ( 104) 00:07:56.800 12.258 - 12.308: 94.5186% ( 68) 00:07:56.800 12.308 - 12.357: 94.9037% ( 56) 00:07:56.800 12.357 - 12.406: 95.2545% ( 51) 00:07:56.800 12.406 - 12.455: 95.5433% ( 42) 00:07:56.800 12.455 - 12.505: 95.7290% ( 27) 00:07:56.801 12.505 - 12.554: 95.8941% ( 24) 00:07:56.801 12.554 - 12.603: 95.9560% ( 9) 00:07:56.801 12.603 - 12.702: 96.1073% ( 22) 00:07:56.801 12.702 - 12.800: 96.2105% ( 15) 00:07:56.801 12.800 - 12.898: 96.2861% ( 11) 00:07:56.801 12.898 - 12.997: 96.3274% ( 6) 00:07:56.801 12.997 - 13.095: 96.3824% ( 8) 00:07:56.801 13.095 - 13.194: 96.4512% ( 10) 00:07:56.801 13.194 - 13.292: 96.5681% ( 17) 00:07:56.801 13.292 - 13.391: 96.6437% ( 11) 00:07:56.801 13.391 - 13.489: 96.7125% ( 10) 00:07:56.801 13.489 - 13.588: 96.7538% ( 6) 00:07:56.801 13.588 - 13.686: 96.8157% ( 9) 00:07:56.801 13.686 - 13.785: 96.9395% ( 18) 00:07:56.801 13.785 - 13.883: 97.0702% ( 19) 00:07:56.801 13.883 - 13.982: 97.1458% ( 11) 00:07:56.801 13.982 - 14.080: 97.2421% ( 14) 00:07:56.801 14.080 - 14.178: 97.3590% ( 17) 00:07:56.801 14.178 - 14.277: 97.4140% ( 8) 00:07:56.801 14.277 - 14.375: 97.4759% ( 9) 00:07:56.801 14.375 - 14.474: 97.5378% ( 9) 00:07:56.801 14.474 - 14.572: 97.6341% ( 14) 00:07:56.801 14.572 - 14.671: 97.6685% ( 5) 00:07:56.801 14.671 - 14.769: 97.7166% ( 7) 00:07:56.801 14.769 - 14.868: 97.7373% ( 3) 00:07:56.801 14.868 - 14.966: 97.8061% ( 10) 00:07:56.801 14.966 - 15.065: 97.8542% ( 7) 00:07:56.801 15.065 - 15.163: 97.8886% ( 5) 00:07:56.801 15.163 - 15.262: 97.9023% ( 2) 00:07:56.801 15.262 - 15.360: 97.9642% ( 9) 00:07:56.801 15.360 - 15.458: 97.9849% ( 3) 00:07:56.801 15.557 - 15.655: 98.0193% ( 5) 00:07:56.801 15.655 - 15.754: 98.0605% ( 6) 00:07:56.801 15.754 - 15.852: 98.0674% ( 1) 00:07:56.801 15.852 - 15.951: 98.1018% ( 5) 00:07:56.801 15.951 - 16.049: 98.1362% ( 5) 
00:07:56.801 16.049 - 16.148: 98.1637% ( 4) 00:07:56.801 16.148 - 16.246: 98.1706% ( 1) 00:07:56.801 16.246 - 16.345: 98.2050% ( 5) 00:07:56.801 16.345 - 16.443: 98.2325% ( 4) 00:07:56.801 16.443 - 16.542: 98.2393% ( 1) 00:07:56.801 16.542 - 16.640: 98.2600% ( 3) 00:07:56.801 16.640 - 16.738: 98.3287% ( 10) 00:07:56.801 16.738 - 16.837: 98.3906% ( 9) 00:07:56.801 16.837 - 16.935: 98.4388% ( 7) 00:07:56.801 16.935 - 17.034: 98.5213% ( 12) 00:07:56.801 17.034 - 17.132: 98.5488% ( 4) 00:07:56.801 17.132 - 17.231: 98.5901% ( 6) 00:07:56.801 17.231 - 17.329: 98.6520% ( 9) 00:07:56.801 17.329 - 17.428: 98.6726% ( 3) 00:07:56.801 17.428 - 17.526: 98.7276% ( 8) 00:07:56.801 17.526 - 17.625: 98.7827% ( 8) 00:07:56.801 17.625 - 17.723: 98.8171% ( 5) 00:07:56.801 17.723 - 17.822: 98.8583% ( 6) 00:07:56.801 17.822 - 17.920: 98.8927% ( 5) 00:07:56.801 17.920 - 18.018: 98.9202% ( 4) 00:07:56.801 18.018 - 18.117: 98.9890% ( 10) 00:07:56.801 18.117 - 18.215: 99.0303% ( 6) 00:07:56.801 18.215 - 18.314: 99.0853% ( 8) 00:07:56.801 18.314 - 18.412: 99.1678% ( 12) 00:07:56.801 18.412 - 18.511: 99.2366% ( 10) 00:07:56.801 18.511 - 18.609: 99.2779% ( 6) 00:07:56.801 18.609 - 18.708: 99.3191% ( 6) 00:07:56.801 18.708 - 18.806: 99.3535% ( 5) 00:07:56.801 18.806 - 18.905: 99.3879% ( 5) 00:07:56.801 18.905 - 19.003: 99.4085% ( 3) 00:07:56.801 19.003 - 19.102: 99.4429% ( 5) 00:07:56.801 19.200 - 19.298: 99.4842% ( 6) 00:07:56.801 19.298 - 19.397: 99.5117% ( 4) 00:07:56.801 19.397 - 19.495: 99.5186% ( 1) 00:07:56.801 19.495 - 19.594: 99.5254% ( 1) 00:07:56.801 19.594 - 19.692: 99.5392% ( 2) 00:07:56.801 19.692 - 19.791: 99.5530% ( 2) 00:07:56.801 19.791 - 19.889: 99.5598% ( 1) 00:07:56.801 19.889 - 19.988: 99.5873% ( 4) 00:07:56.801 20.086 - 20.185: 99.5942% ( 1) 00:07:56.801 20.283 - 20.382: 99.6011% ( 1) 00:07:56.801 20.382 - 20.480: 99.6149% ( 2) 00:07:56.801 20.480 - 20.578: 99.6217% ( 1) 00:07:56.801 20.578 - 20.677: 99.6286% ( 1) 00:07:56.801 20.677 - 20.775: 99.6424% ( 2) 00:07:56.801 20.972 - 21.071: 99.6492% ( 1) 00:07:56.801 21.268 - 21.366: 99.6561% ( 1) 00:07:56.801 21.366 - 21.465: 99.6699% ( 2) 00:07:56.801 21.465 - 21.563: 99.6768% ( 1) 00:07:56.801 22.154 - 22.252: 99.6836% ( 1) 00:07:56.801 22.252 - 22.351: 99.6974% ( 2) 00:07:56.801 22.548 - 22.646: 99.7111% ( 2) 00:07:56.801 22.745 - 22.843: 99.7180% ( 1) 00:07:56.801 23.040 - 23.138: 99.7249% ( 1) 00:07:56.801 23.138 - 23.237: 99.7318% ( 1) 00:07:56.801 23.237 - 23.335: 99.7387% ( 1) 00:07:56.801 23.335 - 23.434: 99.7524% ( 2) 00:07:56.801 23.434 - 23.532: 99.7593% ( 1) 00:07:56.801 23.532 - 23.631: 99.7662% ( 1) 00:07:56.801 23.729 - 23.828: 99.7730% ( 1) 00:07:56.801 24.025 - 24.123: 99.7799% ( 1) 00:07:56.801 24.123 - 24.222: 99.7868% ( 1) 00:07:56.801 24.714 - 24.812: 99.7937% ( 1) 00:07:56.801 24.911 - 25.009: 99.8074% ( 2) 00:07:56.801 25.009 - 25.108: 99.8143% ( 1) 00:07:56.801 25.108 - 25.206: 99.8281% ( 2) 00:07:56.801 25.206 - 25.403: 99.8349% ( 1) 00:07:56.801 25.600 - 25.797: 99.8487% ( 2) 00:07:56.801 27.372 - 27.569: 99.8556% ( 1) 00:07:56.801 28.160 - 28.357: 99.8624% ( 1) 00:07:56.801 28.751 - 28.948: 99.8693% ( 1) 00:07:56.801 31.311 - 31.508: 99.8762% ( 1) 00:07:56.801 31.902 - 32.098: 99.8831% ( 1) 00:07:56.801 32.492 - 32.689: 99.8900% ( 1) 00:07:56.801 35.249 - 35.446: 99.8968% ( 1) 00:07:56.801 36.628 - 36.825: 99.9037% ( 1) 00:07:56.801 36.825 - 37.022: 99.9106% ( 1) 00:07:56.801 37.218 - 37.415: 99.9175% ( 1) 00:07:56.801 38.203 - 38.400: 99.9243% ( 1) 00:07:56.801 41.157 - 41.354: 99.9312% ( 1) 00:07:56.801 43.520 - 43.717: 
99.9381% ( 1) 00:07:56.801 47.262 - 47.458: 99.9450% ( 1) 00:07:56.801 50.018 - 50.215: 99.9519% ( 1) 00:07:56.801 55.138 - 55.532: 99.9587% ( 1) 00:07:56.801 57.502 - 57.895: 99.9656% ( 1) 00:07:56.801 68.135 - 68.529: 99.9725% ( 1) 00:07:56.801 76.012 - 76.406: 99.9794% ( 1) 00:07:56.801 77.194 - 77.588: 99.9862% ( 1) 00:07:56.801 83.889 - 84.283: 99.9931% ( 1) 00:07:56.801 84.283 - 84.677: 100.0000% ( 1) 00:07:56.801 00:07:56.801 Complete histogram 00:07:56.801 ================== 00:07:56.801 Range in us Cumulative Count 00:07:56.801 7.188 - 7.237: 0.0206% ( 3) 00:07:56.801 7.237 - 7.286: 0.2545% ( 34) 00:07:56.801 7.286 - 7.335: 1.9876% ( 252) 00:07:56.801 7.335 - 7.385: 8.6451% ( 968) 00:07:56.801 7.385 - 7.434: 19.6355% ( 1598) 00:07:56.801 7.434 - 7.483: 32.5585% ( 1879) 00:07:56.801 7.483 - 7.532: 45.6809% ( 1908) 00:07:56.801 7.532 - 7.582: 56.8432% ( 1623) 00:07:56.801 7.582 - 7.631: 65.4608% ( 1253) 00:07:56.801 7.631 - 7.680: 72.6891% ( 1051) 00:07:56.801 7.680 - 7.729: 78.0949% ( 786) 00:07:56.801 7.729 - 7.778: 82.2077% ( 598) 00:07:56.801 7.778 - 7.828: 85.3026% ( 450) 00:07:56.801 7.828 - 7.877: 87.8817% ( 375) 00:07:56.801 7.877 - 7.926: 89.7043% ( 265) 00:07:56.801 7.926 - 7.975: 91.0454% ( 195) 00:07:56.801 7.975 - 8.025: 92.1320% ( 158) 00:07:56.801 8.025 - 8.074: 93.0880% ( 139) 00:07:56.801 8.074 - 8.123: 93.7552% ( 97) 00:07:56.801 8.123 - 8.172: 94.3466% ( 86) 00:07:56.801 8.172 - 8.222: 94.7937% ( 65) 00:07:56.801 8.222 - 8.271: 95.1169% ( 47) 00:07:56.801 8.271 - 8.320: 95.4470% ( 48) 00:07:56.801 8.320 - 8.369: 95.7290% ( 41) 00:07:56.801 8.369 - 8.418: 96.0660% ( 49) 00:07:56.801 8.418 - 8.468: 96.2655% ( 29) 00:07:56.801 8.468 - 8.517: 96.4443% ( 26) 00:07:56.801 8.517 - 8.566: 96.6506% ( 30) 00:07:56.801 8.566 - 8.615: 96.8638% ( 31) 00:07:56.801 8.615 - 8.665: 97.0014% ( 20) 00:07:56.801 8.665 - 8.714: 97.0977% ( 14) 00:07:56.801 8.714 - 8.763: 97.1664% ( 10) 00:07:56.801 8.763 - 8.812: 97.2215% ( 8) 00:07:56.801 8.812 - 8.862: 97.3177% ( 14) 00:07:56.801 8.862 - 8.911: 97.3796% ( 9) 00:07:56.801 8.911 - 8.960: 97.4484% ( 10) 00:07:56.801 8.960 - 9.009: 97.4759% ( 4) 00:07:56.801 9.009 - 9.058: 97.5378% ( 9) 00:07:56.801 9.058 - 9.108: 97.5653% ( 4) 00:07:56.801 9.108 - 9.157: 97.5928% ( 4) 00:07:56.801 9.157 - 9.206: 97.6204% ( 4) 00:07:56.801 9.206 - 9.255: 97.6341% ( 2) 00:07:56.801 9.255 - 9.305: 97.6685% ( 5) 00:07:56.801 9.305 - 9.354: 97.6891% ( 3) 00:07:56.801 9.354 - 9.403: 97.6960% ( 1) 00:07:56.801 9.403 - 9.452: 97.7373% ( 6) 00:07:56.801 9.452 - 9.502: 97.7510% ( 2) 00:07:56.801 9.502 - 9.551: 97.7717% ( 3) 00:07:56.801 9.551 - 9.600: 97.7854% ( 2) 00:07:56.801 9.649 - 9.698: 97.7923% ( 1) 00:07:56.801 9.698 - 9.748: 97.7992% ( 1) 00:07:56.801 9.748 - 9.797: 97.8061% ( 1) 00:07:56.801 9.797 - 9.846: 97.8267% ( 3) 00:07:56.801 9.846 - 9.895: 97.8404% ( 2) 00:07:56.801 9.945 - 9.994: 97.8611% ( 3) 00:07:56.801 9.994 - 10.043: 97.8748% ( 2) 00:07:56.801 10.092 - 10.142: 97.8817% ( 1) 00:07:56.802 10.142 - 10.191: 97.9161% ( 5) 00:07:56.802 10.191 - 10.240: 97.9298% ( 2) 00:07:56.802 10.240 - 10.289: 97.9367% ( 1) 00:07:56.802 10.289 - 10.338: 97.9574% ( 3) 00:07:56.802 10.338 - 10.388: 97.9711% ( 2) 00:07:56.802 10.388 - 10.437: 97.9780% ( 1) 00:07:56.802 10.437 - 10.486: 97.9917% ( 2) 00:07:56.802 10.486 - 10.535: 97.9986% ( 1) 00:07:56.802 10.535 - 10.585: 98.0193% ( 3) 00:07:56.802 10.585 - 10.634: 98.0330% ( 2) 00:07:56.802 10.634 - 10.683: 98.0399% ( 1) 00:07:56.802 10.683 - 10.732: 98.0536% ( 2) 00:07:56.802 10.732 - 10.782: 98.0743% ( 3) 
00:07:56.802 10.782 - 10.831: 98.1224% ( 7) 00:07:56.802 10.831 - 10.880: 98.1293% ( 1) 00:07:56.802 10.880 - 10.929: 98.1431% ( 2) 00:07:56.802 10.929 - 10.978: 98.1499% ( 1) 00:07:56.802 10.978 - 11.028: 98.1843% ( 5) 00:07:56.802 11.028 - 11.077: 98.2050% ( 3) 00:07:56.802 11.077 - 11.126: 98.2118% ( 1) 00:07:56.802 11.126 - 11.175: 98.2256% ( 2) 00:07:56.802 11.175 - 11.225: 98.2600% ( 5) 00:07:56.802 11.225 - 11.274: 98.2737% ( 2) 00:07:56.802 11.274 - 11.323: 98.2944% ( 3) 00:07:56.802 11.323 - 11.372: 98.3012% ( 1) 00:07:56.802 11.372 - 11.422: 98.3150% ( 2) 00:07:56.802 11.422 - 11.471: 98.3425% ( 4) 00:07:56.802 11.471 - 11.520: 98.3494% ( 1) 00:07:56.802 11.520 - 11.569: 98.3563% ( 1) 00:07:56.802 11.569 - 11.618: 98.3631% ( 1) 00:07:56.802 11.668 - 11.717: 98.3700% ( 1) 00:07:56.802 11.766 - 11.815: 98.3769% ( 1) 00:07:56.802 11.963 - 12.012: 98.3838% ( 1) 00:07:56.802 12.111 - 12.160: 98.3975% ( 2) 00:07:56.802 12.258 - 12.308: 98.4113% ( 2) 00:07:56.802 12.455 - 12.505: 98.4182% ( 1) 00:07:56.802 12.505 - 12.554: 98.4250% ( 1) 00:07:56.802 12.603 - 12.702: 98.4319% ( 1) 00:07:56.802 12.800 - 12.898: 98.4388% ( 1) 00:07:56.802 12.898 - 12.997: 98.4663% ( 4) 00:07:56.802 12.997 - 13.095: 98.4938% ( 4) 00:07:56.802 13.095 - 13.194: 98.5351% ( 6) 00:07:56.802 13.194 - 13.292: 98.5901% ( 8) 00:07:56.802 13.292 - 13.391: 98.6245% ( 5) 00:07:56.802 13.391 - 13.489: 98.6657% ( 6) 00:07:56.802 13.489 - 13.588: 98.7070% ( 6) 00:07:56.802 13.588 - 13.686: 98.7827% ( 11) 00:07:56.802 13.686 - 13.785: 98.8102% ( 4) 00:07:56.802 13.785 - 13.883: 98.8308% ( 3) 00:07:56.802 13.883 - 13.982: 98.8583% ( 4) 00:07:56.802 13.982 - 14.080: 98.8996% ( 6) 00:07:56.802 14.080 - 14.178: 98.9477% ( 7) 00:07:56.802 14.178 - 14.277: 98.9615% ( 2) 00:07:56.802 14.277 - 14.375: 98.9890% ( 4) 00:07:56.802 14.375 - 14.474: 99.0646% ( 11) 00:07:56.802 14.474 - 14.572: 99.1128% ( 7) 00:07:56.802 14.572 - 14.671: 99.1953% ( 12) 00:07:56.802 14.671 - 14.769: 99.2503% ( 8) 00:07:56.802 14.769 - 14.868: 99.2847% ( 5) 00:07:56.802 14.868 - 14.966: 99.3535% ( 10) 00:07:56.802 14.966 - 15.065: 99.3948% ( 6) 00:07:56.802 15.065 - 15.163: 99.4429% ( 7) 00:07:56.802 15.163 - 15.262: 99.4773% ( 5) 00:07:56.802 15.262 - 15.360: 99.5186% ( 6) 00:07:56.802 15.360 - 15.458: 99.5667% ( 7) 00:07:56.802 15.458 - 15.557: 99.6011% ( 5) 00:07:56.802 15.557 - 15.655: 99.6217% ( 3) 00:07:56.802 15.655 - 15.754: 99.6355% ( 2) 00:07:56.802 15.754 - 15.852: 99.6561% ( 3) 00:07:56.802 15.852 - 15.951: 99.6768% ( 3) 00:07:56.802 15.951 - 16.049: 99.6836% ( 1) 00:07:56.802 16.049 - 16.148: 99.6905% ( 1) 00:07:56.802 16.148 - 16.246: 99.7043% ( 2) 00:07:56.802 16.246 - 16.345: 99.7180% ( 2) 00:07:56.802 16.738 - 16.837: 99.7318% ( 2) 00:07:56.802 16.837 - 16.935: 99.7387% ( 1) 00:07:56.802 16.935 - 17.034: 99.7524% ( 2) 00:07:56.802 17.132 - 17.231: 99.7593% ( 1) 00:07:56.802 17.231 - 17.329: 99.7662% ( 1) 00:07:56.802 17.428 - 17.526: 99.7730% ( 1) 00:07:56.802 17.526 - 17.625: 99.7799% ( 1) 00:07:56.802 17.625 - 17.723: 99.7868% ( 1) 00:07:56.802 17.723 - 17.822: 99.7937% ( 1) 00:07:56.802 18.412 - 18.511: 99.8074% ( 2) 00:07:56.802 18.806 - 18.905: 99.8212% ( 2) 00:07:56.802 18.905 - 19.003: 99.8281% ( 1) 00:07:56.802 19.003 - 19.102: 99.8349% ( 1) 00:07:56.802 19.102 - 19.200: 99.8418% ( 1) 00:07:56.802 19.200 - 19.298: 99.8487% ( 1) 00:07:56.802 19.298 - 19.397: 99.8556% ( 1) 00:07:56.802 19.495 - 19.594: 99.8693% ( 2) 00:07:56.802 19.692 - 19.791: 99.8762% ( 1) 00:07:56.802 19.791 - 19.889: 99.8831% ( 1) 00:07:56.802 20.480 - 20.578: 
99.8900% ( 1) 00:07:56.802 20.972 - 21.071: 99.9037% ( 2) 00:07:56.802 21.071 - 21.169: 99.9106% ( 1) 00:07:56.802 22.055 - 22.154: 99.9175% ( 1) 00:07:56.802 22.154 - 22.252: 99.9243% ( 1) 00:07:56.802 23.335 - 23.434: 99.9312% ( 1) 00:07:56.802 23.434 - 23.532: 99.9381% ( 1) 00:07:56.802 25.797 - 25.994: 99.9450% ( 1) 00:07:56.802 33.871 - 34.068: 99.9519% ( 1) 00:07:56.802 36.431 - 36.628: 99.9587% ( 1) 00:07:56.802 49.822 - 50.018: 99.9656% ( 1) 00:07:56.802 51.988 - 52.382: 99.9725% ( 1) 00:07:56.802 53.563 - 53.957: 99.9794% ( 1) 00:07:56.802 108.702 - 109.489: 99.9862% ( 1) 00:07:56.802 113.428 - 114.215: 99.9931% ( 1) 00:07:56.802 218.978 - 220.554: 100.0000% ( 1) 00:07:56.802 00:07:56.802 00:07:56.802 real 0m1.226s 00:07:56.802 user 0m1.064s 00:07:56.802 sys 0m0.113s 00:07:56.802 09:01:35 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.802 09:01:35 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:56.802 ************************************ 00:07:56.802 END TEST nvme_overhead 00:07:56.802 ************************************ 00:07:56.802 09:01:35 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:56.802 09:01:35 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:56.802 09:01:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.802 09:01:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:56.802 ************************************ 00:07:56.802 START TEST nvme_arbitration 00:07:56.802 ************************************ 00:07:56.802 09:01:35 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:00.089 Initializing NVMe Controllers 00:08:00.089 Attached to 0000:00:10.0 00:08:00.089 Attached to 0000:00:11.0 00:08:00.089 Attached to 0000:00:13.0 00:08:00.089 Attached to 0000:00:12.0 00:08:00.089 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:00.089 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:00.089 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:00.089 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:00.089 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:00.089 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:00.089 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:00.089 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:00.089 Initialization complete. Launching workers. 
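The arbitration results below come from NVMe weighted round robin: each worker core drives an I/O queue created in a particular priority class, and the per-core IO/s reflect how the controller weights those classes. A sketch of allocating one queue in the urgent class, assuming the controller was initialized with WRR arbitration selected (error handling omitted):

```c
#include "spdk/nvme.h"

static struct spdk_nvme_qpair *
alloc_urgent_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	/* Start from the controller's defaults, then pick the priority
	 * class; URGENT is the highest of the four WRR classes. */
	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = SPDK_NVME_QPRIO_URGENT;

	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```

The qprio field only takes effect when the controller's arbitration mechanism is weighted round robin, which has to be requested when the controller is attached.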
00:08:00.089 Starting thread on core 1 with urgent priority queue 00:08:00.089 Starting thread on core 2 with urgent priority queue 00:08:00.089 Starting thread on core 3 with urgent priority queue 00:08:00.089 Starting thread on core 0 with urgent priority queue 00:08:00.089 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:00.089 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:00.089 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:08:00.089 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:08:00.089 QEMU NVMe Ctrl (12343 ) core 2: 874.67 IO/s 114.33 secs/100000 ios 00:08:00.089 QEMU NVMe Ctrl (12342 ) core 3: 917.33 IO/s 109.01 secs/100000 ios 00:08:00.089 ======================================================== 00:08:00.089 00:08:00.089 00:08:00.089 real 0m3.346s 00:08:00.089 user 0m9.247s 00:08:00.089 sys 0m0.146s 00:08:00.089 09:01:38 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.089 09:01:38 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:00.089 ************************************ 00:08:00.089 END TEST nvme_arbitration 00:08:00.089 ************************************ 00:08:00.089 09:01:38 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:00.089 09:01:38 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:00.089 09:01:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.089 09:01:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.089 ************************************ 00:08:00.089 START TEST nvme_single_aen 00:08:00.089 ************************************ 00:08:00.089 09:01:38 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:00.348 Asynchronous Event Request test 00:08:00.348 Attached to 0000:00:10.0 00:08:00.348 Attached to 0000:00:11.0 00:08:00.348 Attached to 0000:00:13.0 00:08:00.348 Attached to 0000:00:12.0 00:08:00.348 Reset controller to setup AER completions for this process 00:08:00.348 Registering asynchronous event callbacks... 
00:08:00.348 Getting orig temperature thresholds of all controllers 00:08:00.348 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.348 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.348 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.348 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:00.348 Setting all controllers temperature threshold low to trigger AER 00:08:00.348 Waiting for all controllers temperature threshold to be set lower 00:08:00.348 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.348 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:00.348 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.348 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:00.348 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.348 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:00.348 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:00.348 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:00.348 Waiting for all controllers to trigger AER and reset threshold 00:08:00.348 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.348 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.348 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.348 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:00.348 Cleaning up... 00:08:00.348 00:08:00.348 real 0m0.240s 00:08:00.348 user 0m0.088s 00:08:00.348 sys 0m0.104s 00:08:00.348 09:01:39 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.348 ************************************ 00:08:00.348 END TEST nvme_single_aen 00:08:00.348 09:01:39 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:00.348 ************************************ 00:08:00.349 09:01:39 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:00.349 09:01:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.349 09:01:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.349 09:01:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.349 ************************************ 00:08:00.349 START TEST nvme_doorbell_aers 00:08:00.349 ************************************ 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
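The single-AER pass above is driven by one admin-side trick: arm an asynchronous event callback, then lower the temperature threshold beneath the roughly 323 Kelvin the controllers report, so each device raises a health-status event. A sketch of that trigger, assuming an attached controller whose admin queue is polled elsewhere with spdk_nvme_ctrlr_process_admin_completions(); the zero threshold is illustrative:

```c
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		printf("AER fired: cdw0=0x%x\n", cpl->cdw0);
	}
}

static void
set_feature_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static int
trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* Set Features, Temperature Threshold: cdw11 carries the threshold
	 * in Kelvin, and 0 sits far below the current temperature. */
	return spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
					       SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					       0 /* cdw11 */, 0 /* cdw12 */,
					       NULL, 0, set_feature_done, NULL);
}
```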
00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:00.349 09:01:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:00.607 [2024-11-20 09:01:39.352338] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:10.573 Executing: test_write_invalid_db 00:08:10.573 Waiting for AER completion... 00:08:10.573 Failure: test_write_invalid_db 00:08:10.573 00:08:10.573 Executing: test_invalid_db_write_overflow_sq 00:08:10.573 Waiting for AER completion... 00:08:10.573 Failure: test_invalid_db_write_overflow_sq 00:08:10.573 00:08:10.573 Executing: test_invalid_db_write_overflow_cq 00:08:10.573 Waiting for AER completion... 00:08:10.573 Failure: test_invalid_db_write_overflow_cq 00:08:10.573 00:08:10.573 09:01:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:10.573 09:01:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:10.573 [2024-11-20 09:01:49.405376] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:20.540 Executing: test_write_invalid_db 00:08:20.540 Waiting for AER completion... 00:08:20.540 Failure: test_write_invalid_db 00:08:20.540 00:08:20.540 Executing: test_invalid_db_write_overflow_sq 00:08:20.540 Waiting for AER completion... 00:08:20.540 Failure: test_invalid_db_write_overflow_sq 00:08:20.540 00:08:20.540 Executing: test_invalid_db_write_overflow_cq 00:08:20.540 Waiting for AER completion... 00:08:20.540 Failure: test_invalid_db_write_overflow_cq 00:08:20.540 00:08:20.540 09:01:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:20.540 09:01:59 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:20.540 [2024-11-20 09:01:59.435648] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:30.569 Executing: test_write_invalid_db 00:08:30.569 Waiting for AER completion... 00:08:30.569 Failure: test_write_invalid_db 00:08:30.569 00:08:30.569 Executing: test_invalid_db_write_overflow_sq 00:08:30.569 Waiting for AER completion... 00:08:30.569 Failure: test_invalid_db_write_overflow_sq 00:08:30.569 00:08:30.569 Executing: test_invalid_db_write_overflow_cq 00:08:30.569 Waiting for AER completion... 
00:08:30.569 Failure: test_invalid_db_write_overflow_cq 00:08:30.569 00:08:30.569 09:02:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.569 09:02:09 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:30.827 [2024-11-20 09:02:09.497259] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 Executing: test_write_invalid_db 00:08:40.845 Waiting for AER completion... 00:08:40.845 Failure: test_write_invalid_db 00:08:40.845 00:08:40.845 Executing: test_invalid_db_write_overflow_sq 00:08:40.845 Waiting for AER completion... 00:08:40.845 Failure: test_invalid_db_write_overflow_sq 00:08:40.845 00:08:40.845 Executing: test_invalid_db_write_overflow_cq 00:08:40.845 Waiting for AER completion... 00:08:40.845 Failure: test_invalid_db_write_overflow_cq 00:08:40.845 00:08:40.845 00:08:40.845 real 0m40.184s 00:08:40.845 user 0m34.237s 00:08:40.845 sys 0m5.561s 00:08:40.845 09:02:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.845 09:02:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:40.845 ************************************ 00:08:40.845 END TEST nvme_doorbell_aers 00:08:40.845 ************************************ 00:08:40.845 09:02:19 nvme -- nvme/nvme.sh@97 -- # uname 00:08:40.845 09:02:19 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:40.845 09:02:19 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:40.845 09:02:19 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:40.845 09:02:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.845 09:02:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:40.845 ************************************ 00:08:40.845 START TEST nvme_multi_aen 00:08:40.845 ************************************ 00:08:40.845 09:02:19 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:40.845 [2024-11-20 09:02:19.537583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.537662] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.537675] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.539366] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.539421] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.539432] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.540541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. 
Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.540574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.540583] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.541648] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.541681] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 [2024-11-20 09:02:19.541690] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63347) is not found. Dropping the request. 00:08:40.845 Child process pid: 63873 00:08:40.845 [Child] Asynchronous Event Request test 00:08:40.845 [Child] Attached to 0000:00:10.0 00:08:40.845 [Child] Attached to 0000:00:11.0 00:08:40.845 [Child] Attached to 0000:00:13.0 00:08:40.845 [Child] Attached to 0000:00:12.0 00:08:40.845 [Child] Registering asynchronous event callbacks... 00:08:40.845 [Child] Getting orig temperature thresholds of all controllers 00:08:40.845 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.846 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.846 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.846 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:40.846 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:40.846 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.846 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.846 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.846 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:40.846 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.846 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.846 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.846 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:40.846 [Child] Cleaning up... 00:08:41.103 Asynchronous Event Request test 00:08:41.103 Attached to 0000:00:10.0 00:08:41.103 Attached to 0000:00:11.0 00:08:41.103 Attached to 0000:00:13.0 00:08:41.103 Attached to 0000:00:12.0 00:08:41.103 Reset controller to setup AER completions for this process 00:08:41.103 Registering asynchronous event callbacks... 
00:08:41.103 Getting orig temperature thresholds of all controllers 00:08:41.103 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.103 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.103 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.103 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:41.103 Setting all controllers temperature threshold low to trigger AER 00:08:41.103 Waiting for all controllers temperature threshold to be set lower 00:08:41.103 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.103 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:41.103 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.103 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:41.103 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.103 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:41.103 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:41.103 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:41.103 Waiting for all controllers to trigger AER and reset threshold 00:08:41.103 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.103 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.103 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.103 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:41.103 Cleaning up... 00:08:41.104 00:08:41.104 real 0m0.458s 00:08:41.104 user 0m0.148s 00:08:41.104 sys 0m0.211s 00:08:41.104 09:02:19 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.104 ************************************ 00:08:41.104 END TEST nvme_multi_aen 00:08:41.104 ************************************ 00:08:41.104 09:02:19 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 09:02:19 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:41.104 09:02:19 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:41.104 09:02:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.104 09:02:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.104 ************************************ 00:08:41.104 START TEST nvme_startup 00:08:41.104 ************************************ 00:08:41.104 09:02:19 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:41.412 Initializing NVMe Controllers 00:08:41.412 Attached to 0000:00:10.0 00:08:41.412 Attached to 0000:00:11.0 00:08:41.412 Attached to 0000:00:13.0 00:08:41.412 Attached to 0000:00:12.0 00:08:41.412 Initialization complete. 00:08:41.412 Time used:145746.547 (us). 
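The "Time used" figure above is essentially the cost of enumerating and bringing up all four controllers. A sketch of that measurement, assuming spdk_env_init() has already run and that attaching to everything found is acceptable (callback bodies illustrative):

```c
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller found */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

static void
time_startup(void)
{
	uint64_t t0 = spdk_get_ticks();

	spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

	printf("Time used:%" PRIu64 " (us).\n",
	       (spdk_get_ticks() - t0) * 1000000ULL / spdk_get_ticks_hz());
}
```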
00:08:41.412 00:08:41.412 real 0m0.209s 00:08:41.412 user 0m0.073s 00:08:41.412 sys 0m0.088s 00:08:41.412 09:02:20 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.412 09:02:20 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:41.412 ************************************ 00:08:41.412 END TEST nvme_startup 00:08:41.412 ************************************ 00:08:41.412 09:02:20 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:41.412 09:02:20 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:41.412 09:02:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.412 09:02:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:41.412 ************************************ 00:08:41.412 START TEST nvme_multi_secondary 00:08:41.412 ************************************ 00:08:41.412 09:02:20 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:41.412 09:02:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63924 00:08:41.412 09:02:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:41.412 09:02:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63925 00:08:41.412 09:02:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:41.412 09:02:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:44.702 Initializing NVMe Controllers 00:08:44.702 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.702 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.702 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.702 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.702 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:44.702 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:44.702 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:44.702 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:44.702 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:44.702 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:44.702 Initialization complete. Launching workers. 
00:08:44.702 ======================================================== 00:08:44.702 Latency(us) 00:08:44.702 Device Information : IOPS MiB/s Average min max 00:08:44.702 PCIE (0000:00:10.0) NSID 1 from core 1: 7391.59 28.87 2163.21 778.95 5919.46 00:08:44.702 PCIE (0000:00:11.0) NSID 1 from core 1: 7391.59 28.87 2164.21 785.98 6052.61 00:08:44.702 PCIE (0000:00:13.0) NSID 1 from core 1: 7391.59 28.87 2164.18 807.31 6849.97 00:08:44.702 PCIE (0000:00:12.0) NSID 1 from core 1: 7391.59 28.87 2164.14 791.71 7982.27 00:08:44.702 PCIE (0000:00:12.0) NSID 2 from core 1: 7391.59 28.87 2164.13 806.38 5797.80 00:08:44.702 PCIE (0000:00:12.0) NSID 3 from core 1: 7391.59 28.87 2164.18 806.97 5795.96 00:08:44.702 ======================================================== 00:08:44.702 Total : 44349.53 173.24 2164.01 778.95 7982.27 00:08:44.702 00:08:44.702 Initializing NVMe Controllers 00:08:44.702 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.702 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.702 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.702 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.702 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:44.702 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:44.702 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:44.702 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:44.702 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:44.702 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:44.702 Initialization complete. Launching workers. 00:08:44.702 ======================================================== 00:08:44.702 Latency(us) 00:08:44.702 Device Information : IOPS MiB/s Average min max 00:08:44.702 PCIE (0000:00:10.0) NSID 1 from core 2: 3023.75 11.81 5289.22 1223.29 13016.96 00:08:44.702 PCIE (0000:00:11.0) NSID 1 from core 2: 3023.75 11.81 5290.66 1244.89 16323.94 00:08:44.702 PCIE (0000:00:13.0) NSID 1 from core 2: 3023.75 11.81 5290.26 1119.74 16655.05 00:08:44.702 PCIE (0000:00:12.0) NSID 1 from core 2: 3023.75 11.81 5291.00 1232.50 13503.76 00:08:44.702 PCIE (0000:00:12.0) NSID 2 from core 2: 3023.75 11.81 5290.96 1119.85 13667.90 00:08:44.702 PCIE (0000:00:12.0) NSID 3 from core 2: 3023.75 11.81 5290.47 997.53 13215.80 00:08:44.702 ======================================================== 00:08:44.702 Total : 18142.52 70.87 5290.43 997.53 16655.05 00:08:44.702 00:08:44.702 09:02:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63924 00:08:46.601 Initializing NVMe Controllers 00:08:46.601 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:46.601 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:46.601 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:46.601 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:46.601 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:46.601 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:46.601 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:46.601 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:46.601 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:46.601 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:46.601 Initialization complete. Launching workers. 
00:08:46.601 ======================================================== 00:08:46.601 Latency(us) 00:08:46.601 Device Information : IOPS MiB/s Average min max 00:08:46.601 PCIE (0000:00:10.0) NSID 1 from core 0: 10463.41 40.87 1527.82 701.26 6185.14 00:08:46.602 PCIE (0000:00:11.0) NSID 1 from core 0: 10463.41 40.87 1528.75 714.15 6004.47 00:08:46.602 PCIE (0000:00:13.0) NSID 1 from core 0: 10463.41 40.87 1528.72 713.71 6206.38 00:08:46.602 PCIE (0000:00:12.0) NSID 1 from core 0: 10463.41 40.87 1528.70 716.61 7532.81 00:08:46.602 PCIE (0000:00:12.0) NSID 2 from core 0: 10463.41 40.87 1528.67 716.38 8428.84 00:08:46.602 PCIE (0000:00:12.0) NSID 3 from core 0: 10463.41 40.87 1528.65 716.01 8967.74 00:08:46.602 ======================================================== 00:08:46.602 Total : 62780.47 245.24 1528.55 701.26 8967.74 00:08:46.602 00:08:46.602 09:02:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63925 00:08:46.602 09:02:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63994 00:08:46.602 09:02:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:46.602 09:02:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63995 00:08:46.602 09:02:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:46.602 09:02:25 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:49.892 Initializing NVMe Controllers 00:08:49.892 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.892 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.892 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.892 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.892 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:49.892 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:49.892 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:49.892 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:49.892 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:49.892 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:49.892 Initialization complete. Launching workers. 
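Every test and perf binary in this log is launched with "-i 0". That is a shared-memory ID: processes passing the same value join one DPDK multi-process domain, which is what lets the secondary perf instances in this test (cores 0x2 and 0x4) reach controllers alongside the primary (core 0x1). A sketch of the environment setup under that assumption (process name illustrative):

```c
#include "spdk/env.h"

static int
init_shared_env(int shm_id)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvme_multi_secondary";
	opts.shm_id = shm_id;	/* same value in every cooperating process */

	return spdk_env_init(&opts);
}
```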
00:08:49.892 ======================================================== 00:08:49.892 Latency(us) 00:08:49.892 Device Information : IOPS MiB/s Average min max 00:08:49.892 PCIE (0000:00:10.0) NSID 1 from core 1: 7912.40 30.91 2020.73 712.66 5701.30 00:08:49.892 PCIE (0000:00:11.0) NSID 1 from core 1: 7912.40 30.91 2021.74 733.26 6465.62 00:08:49.892 PCIE (0000:00:13.0) NSID 1 from core 1: 7912.40 30.91 2021.78 723.80 6770.91 00:08:49.892 PCIE (0000:00:12.0) NSID 1 from core 1: 7912.40 30.91 2021.74 719.44 6475.11 00:08:49.892 PCIE (0000:00:12.0) NSID 2 from core 1: 7912.40 30.91 2021.70 726.71 6437.00 00:08:49.892 PCIE (0000:00:12.0) NSID 3 from core 1: 7912.40 30.91 2021.68 721.69 5878.08 00:08:49.892 ======================================================== 00:08:49.892 Total : 47474.37 185.45 2021.56 712.66 6770.91 00:08:49.892 00:08:50.151 Initializing NVMe Controllers 00:08:50.151 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:50.151 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:50.151 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:50.151 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:50.151 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:50.151 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:50.151 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:50.151 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:50.151 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:50.151 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:50.151 Initialization complete. Launching workers. 00:08:50.151 ======================================================== 00:08:50.151 Latency(us) 00:08:50.151 Device Information : IOPS MiB/s Average min max 00:08:50.151 PCIE (0000:00:10.0) NSID 1 from core 0: 7924.70 30.96 2017.68 714.10 6427.14 00:08:50.151 PCIE (0000:00:11.0) NSID 1 from core 0: 7924.70 30.96 2018.65 736.09 6522.56 00:08:50.151 PCIE (0000:00:13.0) NSID 1 from core 0: 7924.70 30.96 2018.73 734.13 5930.93 00:08:50.151 PCIE (0000:00:12.0) NSID 1 from core 0: 7924.70 30.96 2018.85 730.40 5949.96 00:08:50.151 PCIE (0000:00:12.0) NSID 2 from core 0: 7924.70 30.96 2018.94 737.09 5977.12 00:08:50.151 PCIE (0000:00:12.0) NSID 3 from core 0: 7924.70 30.96 2019.03 735.70 6458.95 00:08:50.151 ======================================================== 00:08:50.151 Total : 47548.23 185.74 2018.65 714.10 6522.56 00:08:50.151 00:08:52.057 Initializing NVMe Controllers 00:08:52.057 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:52.057 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:52.057 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:52.057 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:52.057 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:52.057 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:52.057 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:52.057 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:52.057 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:52.057 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:52.057 Initialization complete. Launching workers. 
00:08:52.057 ======================================================== 00:08:52.057 Latency(us) 00:08:52.057 Device Information : IOPS MiB/s Average min max 00:08:52.057 PCIE (0000:00:10.0) NSID 1 from core 2: 4693.66 18.33 3406.65 733.61 13518.95 00:08:52.057 PCIE (0000:00:11.0) NSID 1 from core 2: 4693.66 18.33 3408.28 723.23 12624.79 00:08:52.057 PCIE (0000:00:13.0) NSID 1 from core 2: 4693.66 18.33 3408.22 741.94 14077.09 00:08:52.057 PCIE (0000:00:12.0) NSID 1 from core 2: 4693.66 18.33 3407.83 721.10 13933.46 00:08:52.057 PCIE (0000:00:12.0) NSID 2 from core 2: 4693.66 18.33 3408.12 681.48 13817.47 00:08:52.057 PCIE (0000:00:12.0) NSID 3 from core 2: 4693.66 18.33 3407.21 635.90 13288.66 00:08:52.057 ======================================================== 00:08:52.057 Total : 28161.98 110.01 3407.72 635.90 14077.09 00:08:52.057 00:08:52.057 ************************************ 00:08:52.057 END TEST nvme_multi_secondary 00:08:52.057 ************************************ 00:08:52.057 09:02:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63994 00:08:52.057 09:02:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63995 00:08:52.057 00:08:52.057 real 0m10.803s 00:08:52.057 user 0m18.388s 00:08:52.057 sys 0m0.679s 00:08:52.057 09:02:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.057 09:02:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:52.057 09:02:30 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:52.057 09:02:30 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:52.058 09:02:30 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62951 ]] 00:08:52.058 09:02:30 nvme -- common/autotest_common.sh@1094 -- # kill 62951 00:08:52.058 09:02:30 nvme -- common/autotest_common.sh@1095 -- # wait 62951 00:08:52.058 [2024-11-20 09:02:30.903032] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.903110] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.903142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.903163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.905763] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.905824] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.905845] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.905864] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.907707] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 
00:08:52.058 [2024-11-20 09:02:30.907744] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.907754] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.907764] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.909217] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.909251] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.909261] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.058 [2024-11-20 09:02:30.909272] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63872) is not found. Dropping the request. 00:08:52.328 09:02:31 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:52.328 09:02:31 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:52.328 09:02:31 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:52.328 09:02:31 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.328 09:02:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.328 09:02:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.328 ************************************ 00:08:52.328 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:52.328 ************************************ 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:52.328 * Looking for test storage... 
00:08:52.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.328 --rc genhtml_branch_coverage=1 00:08:52.328 --rc genhtml_function_coverage=1 00:08:52.328 --rc genhtml_legend=1 00:08:52.328 --rc geninfo_all_blocks=1 00:08:52.328 --rc geninfo_unexecuted_blocks=1 00:08:52.328 00:08:52.328 ' 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.328 --rc genhtml_branch_coverage=1 00:08:52.328 --rc genhtml_function_coverage=1 00:08:52.328 --rc genhtml_legend=1 00:08:52.328 --rc geninfo_all_blocks=1 00:08:52.328 --rc geninfo_unexecuted_blocks=1 00:08:52.328 00:08:52.328 ' 00:08:52.328 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.328 --rc genhtml_branch_coverage=1 00:08:52.328 --rc genhtml_function_coverage=1 00:08:52.328 --rc genhtml_legend=1 00:08:52.328 --rc geninfo_all_blocks=1 00:08:52.329 --rc geninfo_unexecuted_blocks=1 00:08:52.329 00:08:52.329 ' 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.329 --rc genhtml_branch_coverage=1 00:08:52.329 --rc genhtml_function_coverage=1 00:08:52.329 --rc genhtml_legend=1 00:08:52.329 --rc geninfo_all_blocks=1 00:08:52.329 --rc geninfo_unexecuted_blocks=1 00:08:52.329 00:08:52.329 ' 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:52.329 
09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:52.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64162 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64162 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64162 ']' 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
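The trace above resolves the target controller with get_first_nvme_bdf: gen_nvme.sh emits a JSON config, jq extracts every controller's traddr, and the first address (0000:00:10.0 in this run) is echoed back before spdk_tgt is launched with -m 0xF and waitforlisten polls /var/tmp/spdk.sock. A condensed sketch of that lookup, assuming gen_nvme.sh's output matches the .config[].params.traddr shape the jq filter in the log implies (this is a sketch, not the full autotest_common.sh helpers):

    #!/usr/bin/env bash
    # Condensed sketch of the BDF lookup traced above.
    rootdir=/home/vagrant/spdk_repo/spdk

    get_nvme_bdfs() {
        local -a bdfs
        # Same filter the log runs: one PCI address per attached controller.
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && return 1    # the log's "(( 4 == 0 ))" guard
        printf '%s\n' "${bdfs[@]}"
    }

    get_first_nvme_bdf() {
        local -a bdfs=($(get_nvme_bdfs))
        echo "${bdfs[0]}"                     # -> 0000:00:10.0 in this run
    }

    bdf=$(get_first_nvme_bdf)
    [[ -z $bdf ]] && exit 1                   # mirrors the '[' -z ... ']' check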
00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:52.329 09:02:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:52.587 [2024-11-20 09:02:31.291132] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:08:52.587 [2024-11-20 09:02:31.291350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64162 ] 00:08:52.587 [2024-11-20 09:02:31.453995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.845 [2024-11-20 09:02:31.555411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.845 [2024-11-20 09:02:31.555862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.845 [2024-11-20 09:02:31.556062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.845 [2024-11-20 09:02:31.556088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.413 nvme0n1 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_arzTc.txt 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.413 true 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732093352 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64185 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:53.413 09:02:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:53.413 09:02:32 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:55.315 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:55.315 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.315 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.574 [2024-11-20 09:02:34.233541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:55.574 [2024-11-20 09:02:34.233813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:55.574 [2024-11-20 09:02:34.233840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:55.574 [2024-11-20 09:02:34.233853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:55.574 [2024-11-20 09:02:34.235866] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:55.574 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64185 00:08:55.574 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.574 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64185 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64185 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_arzTc.txt 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:55.575 09:02:34 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_arzTc.txt 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64162 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64162 ']' 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64162 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64162 00:08:55.575 killing process with pid 64162 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64162' 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64162 00:08:55.575 09:02:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64162 00:08:57.479 09:02:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:57.479 09:02:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:57.479 00:08:57.479 real 
0m4.840s 00:08:57.479 user 0m17.277s 00:08:57.479 sys 0m0.473s 00:08:57.479 ************************************ 00:08:57.479 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:57.479 ************************************ 00:08:57.479 09:02:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.479 09:02:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:57.479 09:02:35 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:57.479 09:02:35 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:57.479 09:02:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:57.479 09:02:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.479 09:02:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:57.479 ************************************ 00:08:57.479 START TEST nvme_fio 00:08:57.479 ************************************ 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:57.479 09:02:35 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:57.479 09:02:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.479 09:02:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:57.479 09:02:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:57.743 09:02:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:57.744 09:02:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:57.744 09:02:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:57.744 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:57.744 fio-3.35 00:08:57.744 Starting 1 thread 00:09:03.029 00:09:03.029 test: (groupid=0, jobs=1): err= 0: pid=64321: Wed Nov 20 09:02:41 2024 00:09:03.029 read: IOPS=18.4k, BW=71.9MiB/s (75.4MB/s)(147MiB/2038msec) 00:09:03.029 slat (nsec): min=3373, max=87516, avg=5135.35, stdev=2456.27 00:09:03.029 clat (usec): min=713, max=41243, avg=2910.44, stdev=1518.39 00:09:03.029 lat (usec): min=717, max=41248, avg=2915.57, stdev=1519.11 00:09:03.029 clat percentiles (usec): 00:09:03.029 | 1.00th=[ 1303], 5.00th=[ 1860], 10.00th=[ 2180], 20.00th=[ 2343], 00:09:03.029 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:03.029 | 70.00th=[ 2802], 80.00th=[ 3064], 90.00th=[ 4228], 95.00th=[ 5604], 00:09:03.029 | 99.00th=[ 7439], 99.50th=[ 8979], 99.90th=[10945], 99.95th=[40109], 00:09:03.029 | 99.99th=[41157] 00:09:03.029 bw ( KiB/s): min=34608, max=97576, per=100.00%, avg=75014.00, stdev=28437.61, samples=4 00:09:03.029 iops : min= 8652, max=24394, avg=18753.50, stdev=7109.40, samples=4 00:09:03.029 write: IOPS=18.4k, BW=72.0MiB/s (75.5MB/s)(147MiB/2038msec); 0 zone resets 00:09:03.029 slat (nsec): min=3467, max=76186, avg=5381.06, stdev=2423.26 00:09:03.029 clat (usec): min=736, max=54312, avg=4012.32, stdev=5021.24 00:09:03.029 lat (usec): min=740, max=54316, avg=4017.70, stdev=5021.53 00:09:03.029 clat percentiles (usec): 00:09:03.029 | 1.00th=[ 1483], 5.00th=[ 2008], 10.00th=[ 2245], 20.00th=[ 2376], 00:09:03.029 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2638], 00:09:03.029 | 70.00th=[ 2868], 80.00th=[ 3359], 90.00th=[ 5800], 95.00th=[14222], 00:09:03.029 | 99.00th=[27919], 99.50th=[38536], 99.90th=[49546], 99.95th=[52691], 00:09:03.029 | 99.99th=[53740] 00:09:03.029 bw ( KiB/s): min=34184, max=96840, per=100.00%, avg=74898.00, stdev=28427.92, samples=4 00:09:03.029 iops : min= 8546, max=24210, avg=18724.50, stdev=7106.98, samples=4 00:09:03.029 lat 
(usec) : 750=0.01%, 1000=0.07% 00:09:03.029 lat (msec) : 2=5.71%, 4=80.30%, 10=10.47%, 20=2.35%, 50=1.05% 00:09:03.029 lat (msec) : 100=0.05% 00:09:03.029 cpu : usr=99.26%, sys=0.00%, ctx=6, majf=0, minf=607 00:09:03.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:03.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:03.029 issued rwts: total=37535,37548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.029 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:03.029 00:09:03.029 Run status group 0 (all jobs): 00:09:03.029 READ: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=147MiB (154MB), run=2038-2038msec 00:09:03.029 WRITE: bw=72.0MiB/s (75.5MB/s), 72.0MiB/s-72.0MiB/s (75.5MB/s-75.5MB/s), io=147MiB (154MB), run=2038-2038msec 00:09:03.029 ----------------------------------------------------- 00:09:03.029 Suppressions used: 00:09:03.029 count bytes template 00:09:03.029 1 32 /usr/src/fio/parse.c 00:09:03.029 1 8 libtcmalloc_minimal.so 00:09:03.029 ----------------------------------------------------- 00:09:03.029 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:03.029 09:02:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:03.029 09:02:41 nvme.nvme_fio -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:03.029 09:02:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:03.291 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:03.291 fio-3.35 00:09:03.291 Starting 1 thread 00:09:08.582 00:09:08.582 test: (groupid=0, jobs=1): err= 0: pid=64382: Wed Nov 20 09:02:47 2024 00:09:08.582 read: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(147MiB/2001msec) 00:09:08.582 slat (usec): min=4, max=159, avg= 5.91, stdev= 3.09 00:09:08.582 clat (usec): min=677, max=10043, avg=3365.71, stdev=1054.51 00:09:08.582 lat (usec): min=689, max=10109, avg=3371.62, stdev=1056.03 00:09:08.582 clat percentiles (usec): 00:09:08.582 | 1.00th=[ 2114], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2704], 00:09:08.582 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3130], 00:09:08.582 | 70.00th=[ 3294], 80.00th=[ 3785], 90.00th=[ 5080], 95.00th=[ 5866], 00:09:08.582 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 7701], 99.95th=[ 8455], 00:09:08.582 | 99.99th=[ 9765] 00:09:08.582 bw ( KiB/s): min=74768, max=77952, per=100.00%, avg=76144.00, stdev=1635.37, samples=3 00:09:08.582 iops : min=18692, max=19488, avg=19036.00, stdev=408.84, samples=3 00:09:08.582 write: IOPS=18.9k, BW=73.7MiB/s (77.3MB/s)(148MiB/2001msec); 0 zone resets 00:09:08.582 slat (usec): min=4, max=283, avg= 6.06, stdev= 3.30 00:09:08.582 clat (usec): min=585, max=9861, avg=3391.66, stdev=1053.97 00:09:08.582 lat (usec): min=598, max=9875, avg=3397.71, stdev=1055.44 00:09:08.582 clat percentiles (usec): 00:09:08.582 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2704], 00:09:08.582 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3130], 00:09:08.582 | 70.00th=[ 3326], 80.00th=[ 3818], 90.00th=[ 5080], 95.00th=[ 5866], 00:09:08.582 | 99.00th=[ 7046], 99.50th=[ 7373], 99.90th=[ 8094], 99.95th=[ 8848], 00:09:08.582 | 99.99th=[ 9634] 00:09:08.582 bw ( KiB/s): min=75048, max=78216, per=100.00%, avg=76277.33, stdev=1698.95, samples=3 00:09:08.582 iops : min=18762, max=19554, avg=19069.33, stdev=424.74, samples=3 00:09:08.582 lat (usec) : 750=0.01%, 1000=0.01% 00:09:08.582 lat (msec) : 2=0.59%, 4=81.21%, 10=18.19%, 20=0.01% 00:09:08.582 cpu : usr=98.45%, sys=0.55%, ctx=16, majf=0, minf=608 00:09:08.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:08.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.582 issued rwts: total=37752,37777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.582 00:09:08.582 Run status group 0 (all jobs): 00:09:08.582 READ: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=147MiB (155MB), run=2001-2001msec 00:09:08.582 WRITE: bw=73.7MiB/s (77.3MB/s), 73.7MiB/s-73.7MiB/s (77.3MB/s-77.3MB/s), io=148MiB (155MB), run=2001-2001msec 00:09:08.842 ----------------------------------------------------- 00:09:08.842 Suppressions used: 00:09:08.842 count bytes template 
00:09:08.842 1 32 /usr/src/fio/parse.c 00:09:08.842 1 8 libtcmalloc_minimal.so 00:09:08.842 ----------------------------------------------------- 00:09:08.842 00:09:08.842 09:02:47 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:08.842 09:02:47 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:08.842 09:02:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:08.842 09:02:47 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:09.112 09:02:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:09.112 09:02:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:09.374 09:02:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:09.374 09:02:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:09.374 09:02:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:09.374 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:09.374 fio-3.35 00:09:09.374 Starting 1 thread 00:09:15.962 00:09:15.962 test: (groupid=0, jobs=1): err= 0: pid=64444: Wed Nov 20 09:02:53 2024 00:09:15.962 read: IOPS=16.0k, BW=62.5MiB/s (65.6MB/s)(125MiB/2001msec) 00:09:15.962 slat (nsec): min=4855, max=86377, avg=6873.10, stdev=3666.11 00:09:15.962 clat (usec): min=215, max=11013, avg=3952.39, stdev=1277.58 00:09:15.962 lat (usec): min=220, max=11020, 
avg=3959.27, stdev=1279.14 00:09:15.962 clat percentiles (usec): 00:09:15.962 | 1.00th=[ 2311], 5.00th=[ 2769], 10.00th=[ 2900], 20.00th=[ 3032], 00:09:15.962 | 30.00th=[ 3163], 40.00th=[ 3261], 50.00th=[ 3425], 60.00th=[ 3687], 00:09:15.962 | 70.00th=[ 4178], 80.00th=[ 4883], 90.00th=[ 5866], 95.00th=[ 6718], 00:09:15.962 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9634], 99.95th=[10028], 00:09:15.962 | 99.99th=[10421] 00:09:15.962 bw ( KiB/s): min=54290, max=69144, per=97.76%, avg=62584.67, stdev=7577.52, samples=3 00:09:15.962 iops : min=13572, max=17286, avg=15646.00, stdev=1894.65, samples=3 00:09:15.962 write: IOPS=16.0k, BW=62.6MiB/s (65.7MB/s)(125MiB/2001msec); 0 zone resets 00:09:15.962 slat (usec): min=4, max=112, avg= 7.13, stdev= 3.82 00:09:15.962 clat (usec): min=239, max=11202, avg=4003.25, stdev=1294.37 00:09:15.962 lat (usec): min=244, max=11228, avg=4010.38, stdev=1295.94 00:09:15.962 clat percentiles (usec): 00:09:15.962 | 1.00th=[ 2343], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3064], 00:09:15.962 | 30.00th=[ 3195], 40.00th=[ 3326], 50.00th=[ 3490], 60.00th=[ 3752], 00:09:15.962 | 70.00th=[ 4228], 80.00th=[ 4948], 90.00th=[ 5932], 95.00th=[ 6849], 00:09:15.962 | 99.00th=[ 8094], 99.50th=[ 8586], 99.90th=[ 9765], 99.95th=[10159], 00:09:15.962 | 99.99th=[10814] 00:09:15.962 bw ( KiB/s): min=53407, max=68360, per=96.93%, avg=62175.67, stdev=7804.30, samples=3 00:09:15.963 iops : min=13351, max=17090, avg=15543.67, stdev=1951.50, samples=3 00:09:15.963 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.02% 00:09:15.963 lat (msec) : 2=0.48%, 4=66.23%, 10=33.17%, 20=0.07% 00:09:15.963 cpu : usr=98.65%, sys=0.05%, ctx=5, majf=0, minf=607 00:09:15.963 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:15.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:15.963 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:15.963 issued rwts: total=32024,32089,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:15.963 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:15.963 00:09:15.963 Run status group 0 (all jobs): 00:09:15.963 READ: bw=62.5MiB/s (65.6MB/s), 62.5MiB/s-62.5MiB/s (65.6MB/s-65.6MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:15.963 WRITE: bw=62.6MiB/s (65.7MB/s), 62.6MiB/s-62.6MiB/s (65.7MB/s-65.7MB/s), io=125MiB (131MB), run=2001-2001msec 00:09:15.963 ----------------------------------------------------- 00:09:15.963 Suppressions used: 00:09:15.963 count bytes template 00:09:15.963 1 32 /usr/src/fio/parse.c 00:09:15.963 1 8 libtcmalloc_minimal.so 00:09:15.963 ----------------------------------------------------- 00:09:15.963 00:09:15.963 09:02:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:15.963 09:02:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:15.963 09:02:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:15.963 09:02:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:15.963 09:02:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:15.963 09:02:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:15.963 09:02:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:15.963 09:02:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:15.963 09:02:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:15.963 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:15.963 fio-3.35 00:09:15.963 Starting 1 thread 00:09:24.096 00:09:24.097 test: (groupid=0, jobs=1): err= 0: pid=64509: Wed Nov 20 09:03:02 2024 00:09:24.097 read: IOPS=18.3k, BW=71.5MiB/s (74.9MB/s)(144MiB/2010msec) 00:09:24.097 slat (nsec): min=3477, max=78272, avg=5809.00, stdev=3106.89 00:09:24.097 clat (usec): min=665, max=12537, avg=2959.55, stdev=1145.75 00:09:24.097 lat (usec): min=670, max=12543, avg=2965.35, stdev=1147.10 00:09:24.097 clat percentiles (usec): 00:09:24.097 | 1.00th=[ 1172], 5.00th=[ 1516], 10.00th=[ 1827], 20.00th=[ 2278], 00:09:24.097 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2835], 00:09:24.097 | 70.00th=[ 3032], 80.00th=[ 3523], 90.00th=[ 4555], 95.00th=[ 5407], 00:09:24.097 | 99.00th=[ 6652], 99.50th=[ 7111], 99.90th=[ 9503], 99.95th=[10683], 00:09:24.097 | 99.99th=[12518] 00:09:24.097 bw ( KiB/s): min=64424, max=82264, per=100.00%, avg=73476.00, stdev=8256.67, samples=4 00:09:24.097 iops : min=16106, max=20566, avg=18369.00, stdev=2064.17, samples=4 00:09:24.097 write: IOPS=18.3k, BW=71.5MiB/s (74.9MB/s)(144MiB/2010msec); 0 zone resets 00:09:24.097 slat (nsec): min=3565, max=73317, avg=5962.99, stdev=2971.74 00:09:24.097 clat (usec): min=728, max=28440, avg=4008.65, stdev=3403.68 00:09:24.097 lat (usec): min=733, max=28445, avg=4014.61, stdev=3404.05 00:09:24.097 clat percentiles (usec): 00:09:24.097 | 1.00th=[ 1254], 5.00th=[ 1680], 10.00th=[ 
2073], 20.00th=[ 2409], 00:09:24.097 | 30.00th=[ 2540], 40.00th=[ 2671], 50.00th=[ 2802], 60.00th=[ 3032], 00:09:24.097 | 70.00th=[ 3490], 80.00th=[ 4555], 90.00th=[ 6718], 95.00th=[12649], 00:09:24.097 | 99.00th=[18744], 99.50th=[21365], 99.90th=[24249], 99.95th=[25297], 00:09:24.097 | 99.99th=[27657] 00:09:24.097 bw ( KiB/s): min=63992, max=82000, per=100.00%, avg=73346.00, stdev=8460.41, samples=4 00:09:24.097 iops : min=15996, max=20500, avg=18336.00, stdev=2115.84, samples=4 00:09:24.097 lat (usec) : 750=0.02%, 1000=0.15% 00:09:24.097 lat (msec) : 2=10.78%, 4=69.45%, 10=16.06%, 20=3.16%, 50=0.38% 00:09:24.097 cpu : usr=98.86%, sys=0.15%, ctx=3, majf=0, minf=606 00:09:24.097 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:24.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:24.097 issued rwts: total=36772,36766,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.097 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:24.097 00:09:24.097 Run status group 0 (all jobs): 00:09:24.097 READ: bw=71.5MiB/s (74.9MB/s), 71.5MiB/s-71.5MiB/s (74.9MB/s-74.9MB/s), io=144MiB (151MB), run=2010-2010msec 00:09:24.097 WRITE: bw=71.5MiB/s (74.9MB/s), 71.5MiB/s-71.5MiB/s (74.9MB/s-74.9MB/s), io=144MiB (151MB), run=2010-2010msec 00:09:24.097 ----------------------------------------------------- 00:09:24.097 Suppressions used: 00:09:24.097 count bytes template 00:09:24.097 1 32 /usr/src/fio/parse.c 00:09:24.097 1 8 libtcmalloc_minimal.so 00:09:24.097 ----------------------------------------------------- 00:09:24.097 00:09:24.097 ************************************ 00:09:24.097 END TEST nvme_fio 00:09:24.097 ************************************ 00:09:24.097 09:03:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:24.097 09:03:02 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:24.097 00:09:24.097 real 0m26.530s 00:09:24.097 user 0m16.289s 00:09:24.097 sys 0m18.469s 00:09:24.097 09:03:02 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.097 09:03:02 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:24.097 ************************************ 00:09:24.097 END TEST nvme 00:09:24.097 ************************************ 00:09:24.097 00:09:24.097 real 1m36.271s 00:09:24.097 user 3m38.350s 00:09:24.097 sys 0m29.050s 00:09:24.097 09:03:02 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:24.097 09:03:02 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:24.097 09:03:02 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:24.097 09:03:02 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.097 09:03:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:24.097 09:03:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:24.097 09:03:02 -- common/autotest_common.sh@10 -- # set +x 00:09:24.097 ************************************ 00:09:24.097 START TEST nvme_scc 00:09:24.097 ************************************ 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:24.097 * Looking for test storage... 
00:09:24.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:24.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.097 --rc genhtml_branch_coverage=1 00:09:24.097 --rc genhtml_function_coverage=1 00:09:24.097 --rc genhtml_legend=1 00:09:24.097 --rc geninfo_all_blocks=1 00:09:24.097 --rc geninfo_unexecuted_blocks=1 00:09:24.097 00:09:24.097 ' 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:24.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.097 --rc genhtml_branch_coverage=1 00:09:24.097 --rc genhtml_function_coverage=1 00:09:24.097 --rc genhtml_legend=1 00:09:24.097 --rc geninfo_all_blocks=1 00:09:24.097 --rc geninfo_unexecuted_blocks=1 00:09:24.097 00:09:24.097 ' 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:24.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.097 --rc genhtml_branch_coverage=1 00:09:24.097 --rc genhtml_function_coverage=1 00:09:24.097 --rc genhtml_legend=1 00:09:24.097 --rc geninfo_all_blocks=1 00:09:24.097 --rc geninfo_unexecuted_blocks=1 00:09:24.097 00:09:24.097 ' 00:09:24.097 09:03:02 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:24.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:24.097 --rc genhtml_branch_coverage=1 00:09:24.097 --rc genhtml_function_coverage=1 00:09:24.097 --rc genhtml_legend=1 00:09:24.097 --rc geninfo_all_blocks=1 00:09:24.097 --rc geninfo_unexecuted_blocks=1 00:09:24.097 00:09:24.097 ' 00:09:24.097 09:03:02 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.097 09:03:02 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:24.097 09:03:02 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:24.097 09:03:02 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:24.097 09:03:02 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.097 09:03:02 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.097 09:03:02 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.097 09:03:02 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.098 09:03:02 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.098 09:03:02 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:24.098 09:03:02 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
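The PATH echoed by paths/export.sh@6 above carries four copies of the golangci, protoc, and go toolchain directories, apparently because each nested source of paths/export.sh prepends the same three directories again unconditionally (the @2–@4 lines each prepend one). The duplicates are harmless for lookup, but a guard like the following hypothetical helper would keep the prepend idempotent (a sketch; not what export.sh actually does):

    # Hypothetical path_prepend helper: add a directory to PATH only if absent.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) : ;;                # already present, keep PATH as-is
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH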
00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:24.098 09:03:02 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:24.098 09:03:02 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:24.098 09:03:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:24.098 09:03:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:24.098 09:03:02 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:24.098 09:03:02 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:24.358 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:24.358 Waiting for block devices as requested 00:09:24.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.620 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.620 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.620 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:29.984 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:29.984 09:03:08 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:29.984 09:03:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.984 09:03:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:29.984 09:03:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.984 09:03:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
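scan_nvme_ctrls, traced above, walks /sys/class/nvme/nvme*, maps nvme0 to PCI address 0000:00:11.0, and then nvme_get flattens the output of `nvme id-ctrl /dev/nvme0` into the nvme0 associative array one IFS=':' read at a time, as the eval lines show (nvme0[vid]=0x1b36, and so on below). A condensed sketch of that parse loop, assuming id-ctrl's usual "field : value" plain-text layout, which the IFS=':' read in the trace implies:

    # Sketch of the nvme_get loop traced above: flatten id-ctrl output
    # into an associative array keyed by register name.
    declare -A nvme0                           # the log uses local -gA 'nvme0=()'
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank/section lines
        reg=${reg//[[:space:]]/}               # strip the column padding
        val=${val# }                           # drop one leading space only
        nvme0[$reg]=$val                       # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    printf '%s\n' "${nvme0[vid]}"              # -> 0x1b36 for this controller

Note that values keep their trailing padding, which is why the trace below quotes the serial number as '12341       ' and the model as 'QEMU NVMe Ctrl                          '.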
00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:29.984 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
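Two registers captured above are worth decoding. mdts=7 is a power of two in units of the controller's minimum memory page size, so with the usual CAP.MPSMIN of 0 (4 KiB pages) this QEMU controller accepts at most 512 KiB per data transfer; oacs=0x12a is the optional-admin-command bitmask (bits 1, 3, 5 and 8: Format NVM, namespace management, directives, doorbell buffer config). A quick check of that arithmetic, assuming MPSMIN=0:

    # Sketch: interpret mdts/oacs from the id-ctrl dump above (assumes CAP.MPSMIN=0).
    mdts=7 oacs=0x12a
    page=$((4 * 1024))                                    # 2^(12 + MPSMIN) bytes
    echo "max transfer: $(( page * (1 << mdts) )) bytes"  # 524288 = 512 KiB
    (( oacs & (1 << 1) )) && echo "Format NVM supported"  # OACS bit 1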
00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:29.985 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:29.986 09:03:08 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.986 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.987 09:03:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:29.987 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
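The namespace pass above (functions.sh@53-57) binds a nameref, local -n _ctrl_ns=nvme0_ns, so the same loop body can fill a per-controller namespace map no matter what the controller is called; functions.sh@58 below then files each device node under its namespace id. A minimal sketch of that nameref pattern, with illustrative names:

    # Sketch: a bash nameref lets one helper fill differently named per-controller maps.
    declare -A nvme0_ns=() nvme1_ns=()
    fill_ns_map() {
        local -n _ctrl_ns=${1}_ns    # _ctrl_ns aliases nvme0_ns or nvme1_ns
        _ctrl_ns[1]=${1}n1           # key: namespace id, value: device name
    }
    fill_ns_map nvme0
    fill_ns_map nvme1
    echo "${nvme0_ns[1]} ${nvme1_ns[1]}"   # -> nvme0n1 nvme1n1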
00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
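The id-ns fields already recorded for nvme0n1 pin down its size: nsze=0x140000 blocks, and flbas=0x4 selects LBA format 4, whose lbaf4 entry just below reads lbads:12, i.e. 4096-byte blocks. A quick arithmetic check:

    # Sketch: namespace capacity from the id-ns values above (nsze, flbas -> lbaf4).
    nsze=$((0x140000))     # 1310720 blocks
    lbads=12               # block size 2^12 = 4096 bytes
    echo "capacity: $(( nsze * (1 << lbads) )) bytes"   # 5368709120 = 5 GiB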
00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:29.988 09:03:08 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:29.988 09:03:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:29.988 09:03:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:29.988 09:03:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:29.989 09:03:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:29.989 09:03:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:29.989 
09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 '
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl '
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 '
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:09:29.989 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:09:29.990 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:09:29.991 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
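The nvme_get trace above shows how nvme/functions.sh turns nvme id-ctrl output into the global associative array nvme1: each output line is split on ':' into a register name and a value (functions.sh@21-22), and non-empty values are eval'd into the array (functions.sh@23). A minimal sketch of that pattern, assuming a hypothetical helper name parse_id_ctrl and the nvme-cli binary on PATH; the key-whitespace cleanup is an assumption, not lifted from functions.sh:

parse_id_ctrl() {
    # $1 = name of the global associative array to fill, $2 = device node
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                       # same declaration shape as functions.sh@20
    while IFS=: read -r reg val; do           # split "reg : val" lines, as traced at @21
        reg=${reg//[[:space:]]/}              # assumed cleanup: drop padding around the key
        [[ -n $reg && -n $val ]] || continue  # mirrors the [[ -n ... ]] guard at @22
        eval "${ref}[${reg}]=\"${val# }\""    # same eval-assignment shape as @23
    done < <(nvme id-ctrl "$dev")
}

After parse_id_ctrl nvme1 /dev/nvme1, ${nvme1[sn]} would hold '12340 ' and ${nvme1[subnqn]} nqn.2019-08.org.qemu:12340, matching the assignments traced above; note that val takes the remainder of the line, so values that themselves contain ':' survive the IFS split.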
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:09:29.992 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
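The id-ns dump for nvme1n1 ends with eight LBA formats, and flbas=0x7 marks entry 7 (ms:64 lbads:12, the one tagged "(in use)") as active. A hedged decode of those captured values; the variable names here are illustrative, and the FLBAS-low-nibble / lbads-as-log2-of-block-size semantics come from the NVMe specification rather than from this log:

# Values copied from the nvme1n1 trace above.
flbas=0x7
lbaf='ms:64 lbads:12 rp:0 (in use)'
fmt=$(( flbas & 0xf ))                      # low nibble of FLBAS -> format index 7
lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
ms=${lbaf#ms:}; ms=${ms%% *}                # -> 64 metadata bytes per block
echo "lbaf$fmt: $(( 1 << lbads ))-byte blocks, ${ms}B metadata"
# nsze=0x17a17a (1,548,666) blocks * 4096 bytes/block ~= 6.3 GB namespace.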
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:29.993 09:03:08 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:29.993 09:03:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:29.993 09:03:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:29.993 09:03:08 nvme_scc -- scripts/common.sh@27 -- # return 0
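Restated as standalone bash, the registration step traced above (functions.sh@58-63) records each controller that survives the pci_can_use filter in four global tables; the table names are taken from the trace, while the declare scaffolding is assumed so the snippet is self-contained:

declare -A ctrls nvmes bdfs              # assumed declarations; filled per controller
declare -a ordered_ctrls
ctrl_dev=nvme1
ctrls["$ctrl_dev"]=nvme1                 # controller -> name of its id-ctrl field array
nvmes["$ctrl_dev"]=nvme1_ns              # controller -> map of its per-namespace arrays
bdfs["$ctrl_dev"]=0000:00:10.0           # controller -> PCI bus:device.function
ordered_ctrls[${ctrl_dev/nvme/}]=nvme1   # numeric index 1, preserving scan order

Later test code can then walk ordered_ctrls and look each controller's PCI address up in bdfs without re-reading sysfs or re-running nvme-cli.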
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:29.993 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:09:29.994 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]]
00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.995 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:29.996 
09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
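The xtrace above is nvme/functions.sh's nvme_get at work: it runs nvme-cli, splits every "mnemonic : value" line on the first colon, and evals the pair into a global associative array (here nvme2, then nvme2n1 and friends). A minimal sketch of that pattern under stated assumptions: fake_id_ctrl is a hypothetical stand-in for /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2, its values are copied from the log above, and the trimming is simplified relative to the real helper.

#!/usr/bin/env bash
# Sketch of the nvme_get pattern seen in the trace: parse "mnemonic : value"
# lines into a global associative array. Illustrative, not the SPDK helper.
nvme_get() {
  local ref=$1 reg val
  shift
  local -gA "$ref=()"                 # same construct as functions.sh@20
  while IFS=: read -r reg val; do
    [[ -n $val ]] || continue         # skip lines without a "key : value" pair
    reg=${reg//[[:space:]]/}          # "ps 0   " -> "ps0", "oncs " -> "oncs"
    val=${val# }                      # drop the space after the colon
    eval "$ref[\$reg]=\$val"          # -> nvme2[oncs]=0x15d, etc.
  done < <("$@")
}

# Hypothetical stand-in for: nvme_get nvme2 .../nvme id-ctrl /dev/nvme2
fake_id_ctrl() { printf '%s\n' 'oncs      : 0x15d' 'sqes      : 0x66' 'nn        : 256'; }
nvme_get nvme2 fake_id_ctrl
echo "oncs=${nvme2[oncs]} sqes=${nvme2[sqes]} nn=${nvme2[nn]}"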
00:09:29.996 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1 (id-ns) register values:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0
    endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
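The values just captured for nvme2n1 make the namespace geometry easy to check by hand: flbas=0x4 points at LBA format 4 ('ms:0 lbads:12 rp:0 (in use)'), so blocks are 2^12 bytes with no metadata, and nsze=0x100000 is the size in blocks. A quick arithmetic sketch (variable names are illustrative):

#!/usr/bin/env bash
# Worked example from the nvme2n1 values above: lbads=12 -> 4096-byte blocks,
# nsze=0x100000 blocks -> total namespace capacity.
nsze=0x100000
lbads=12
block=$((1 << lbads))                 # 4096 bytes per block
bytes=$((nsze * block))
echo "$((nsze)) blocks x ${block} B = ${bytes} B ($((bytes >> 30)) GiB)"
# -> 1048576 blocks x 4096 B = 4294967296 B (4 GiB)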
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:30.266 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2 (id-ns) register values:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0
    rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0
    endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:30.267 09:03:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
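The functions.sh@54-58 records show how the loop walks the controller's namespaces: it globs nvme2n1..nvme2n3 out of sysfs, runs nvme_get on each node, and indexes the result in the nvme2_ns nameref. A self-contained sketch of that loop, assuming a controller actually exists at /sys/class/nvme/nvme2 (nvme_get itself is stubbed out in a comment):

#!/usr/bin/env bash
# Sketch of the per-namespace loop from the trace: enumerate a controller's
# namespaces via sysfs and index them by namespace number.
ctrl=/sys/class/nvme/nvme2
declare -gA nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns            # nameref, like `local -n` in the log

for ns in "$ctrl/${ctrl##*/}n"*; do     # /sys/class/nvme/nvme2/nvme2n1 ...
  [[ -e $ns ]] || continue              # the glob may match nothing
  ns_dev=${ns##*/}                      # nvme2n1, nvme2n2, nvme2n3
  # nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # fills nvme2n1[...] and so on
  _ctrl_ns[${ns_dev##*n}]=$ns_dev       # nvme2_ns[1]=nvme2n1, [2]=nvme2n2 ...
done

for i in "${!_ctrl_ns[@]}"; do echo "ns $i -> ${_ctrl_ns[$i]}"; done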
09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:30.268 09:03:08 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:30.268 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:30.269 
09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:30.269 09:03:08 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:30.269 09:03:08 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:30.269 09:03:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:30.269 09:03:08 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:30.269 09:03:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
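Stripped of the xtrace noise, the loop running here (for the fourth controller now, /dev/nvme3) is nvme-cli's "field : value" output being folded into a global bash associative array. A condensed sketch of the pattern, not the verbatim functions.sh code; it assumes nvme-cli's default id-ctrl text format:

declare -A ctrl=()
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue          # same guard the trace shows skipping the banner line
    reg=${reg//[[:space:]]/}           # strip alignment padding ("ps    0 " becomes "ps0")
    val=${val# }                       # drop the single space nvme-cli prints after ':'
    ctrl[$reg]=$val
done < <(nvme id-ctrl /dev/nvme3)      # same binary the trace invokes
echo "vid=${ctrl[vid]} sn=${ctrl[sn]} mdts=${ctrl[mdts]}"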
00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:30.269 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
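The ctratt=0x88010 captured just above is the interesting field for this job: in NVMe 2.x, CTRATT bit 19 advertises Flexible Data Placement, which the nvme_fdp suite at the end of this log exercises, and this controller later reports the subsystem NQN nqn.2019-08.org.qemu:fdp-subsys3. A hedged one-liner check in the same style functions.sh uses for ONCS; the helper name is illustrative, not part of the script:

ctrl_has_fdp() {
    local -n _ctrl=$1                       # nameref, as at functions.sh@73 in the trace
    (( ${_ctrl[ctratt]:-0} & 1 << 19 ))     # bit 19 (0x80000): Flexible Data Placement
}
ctrl_has_fdp nvme3 && echo "nvme3 advertises FDP"   # true for 0x88010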
00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.270 
09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.270 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:30.271 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
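The oncs=0x15d recorded just above is what the rest of this suite keys on: ONCS bit 8 is the Copy (Simple Copy) command, and the ctrl_has_scc checks traced a little further down reduce to exactly this bit test. Condensed from the xtrace (the get_oncs indirection is inlined here):

ctrl_has_scc() {
    local ctrl=$1 oncs
    local -n _ctrl=$ctrl        # functions.sh@73 in the trace
    oncs=${_ctrl[oncs]:-0}
    (( oncs & 1 << 8 ))         # functions.sh@188: ONCS bit 8 = Copy supported
}

With 0x15d the test is non-zero for all four controllers, which is why nvme0 through nvme3 are each echoed as SCC-capable below before nvme1 is picked.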
00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:30.272 09:03:08 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:30.272 09:03:08 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:30.272 09:03:08 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:30.272 
09:03:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:30.272 09:03:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:30.273 09:03:09 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:30.273 09:03:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:30.273 09:03:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:30.273 09:03:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:30.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.437 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.437 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.437 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:31.437 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:31.438 09:03:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:31.438 09:03:10 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.438 09:03:10 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.438 09:03:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:31.438 ************************************ 00:09:31.438 START TEST nvme_simple_copy 00:09:31.438 ************************************ 00:09:31.438 09:03:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:31.696 Initializing NVMe Controllers 00:09:31.696 Attaching to 0000:00:10.0 00:09:31.696 Controller supports SCC. Attached to 0000:00:10.0 00:09:31.696 Namespace ID: 1 size: 6GB 00:09:31.696 Initialization complete. 00:09:31.696 00:09:31.696 Controller QEMU NVMe Ctrl (12340 ) 00:09:31.696 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:31.696 Namespace Block Size:4096 00:09:31.696 Writing LBAs 0 to 63 with Random Data 00:09:31.696 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:31.696 LBAs matching Written Data: 64 00:09:31.696 00:09:31.696 real 0m0.280s 00:09:31.696 user 0m0.102s 00:09:31.696 sys 0m0.075s 00:09:31.696 09:03:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.696 ************************************ 00:09:31.696 END TEST nvme_simple_copy 00:09:31.696 ************************************ 00:09:31.696 09:03:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:31.696 ************************************ 00:09:31.696 END TEST nvme_scc 00:09:31.696 ************************************ 00:09:31.696 00:09:31.696 real 0m7.988s 00:09:31.696 user 0m1.177s 00:09:31.696 sys 0m1.474s 00:09:31.696 09:03:10 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.696 09:03:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:31.696 09:03:10 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:31.696 09:03:10 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:31.696 09:03:10 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:31.696 09:03:10 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:31.696 09:03:10 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:31.696 09:03:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.696 09:03:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.696 09:03:10 -- common/autotest_common.sh@10 -- # set +x 00:09:31.696 ************************************ 00:09:31.696 START TEST nvme_fdp 00:09:31.696 ************************************ 00:09:31.696 09:03:10 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:09:31.956 * Looking for test storage... 
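For reference, the nvme_simple_copy pass above wrote random data to LBAs 0-63 of namespace 1 on nvme1 (0000:00:10.0), issued a Simple Copy to destination LBA 256, and verified all 64 blocks matched. A rough way to spot-check the same state by hand, assuming the namespace were bound back to the kernel driver and visible as /dev/nvme0n1 (the device name is an assumption; the block size comes from the test's own "Namespace Block Size:4096" line):

bs=4096
dd if=/dev/nvme0n1 bs=$bs skip=0   count=64 status=none > src.bin
dd if=/dev/nvme0n1 bs=$bs skip=256 count=64 status=none > dst.bin
cmp src.bin dst.bin && echo "LBAs matching Written Data: 64"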
00:09:31.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.956 09:03:10 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.956 --rc genhtml_branch_coverage=1 00:09:31.956 --rc genhtml_function_coverage=1 00:09:31.956 --rc genhtml_legend=1 00:09:31.956 --rc geninfo_all_blocks=1 00:09:31.956 --rc geninfo_unexecuted_blocks=1 00:09:31.956 00:09:31.956 ' 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.956 --rc genhtml_branch_coverage=1 00:09:31.956 --rc genhtml_function_coverage=1 00:09:31.956 --rc genhtml_legend=1 00:09:31.956 --rc geninfo_all_blocks=1 00:09:31.956 --rc geninfo_unexecuted_blocks=1 00:09:31.956 00:09:31.956 ' 00:09:31.956 09:03:10 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:31.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.956 --rc genhtml_branch_coverage=1 00:09:31.956 --rc genhtml_function_coverage=1 00:09:31.956 --rc genhtml_legend=1 00:09:31.956 --rc geninfo_all_blocks=1 00:09:31.957 --rc geninfo_unexecuted_blocks=1 00:09:31.957 00:09:31.957 ' 00:09:31.957 09:03:10 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.957 --rc genhtml_branch_coverage=1 00:09:31.957 --rc genhtml_function_coverage=1 00:09:31.957 --rc genhtml_legend=1 00:09:31.957 --rc geninfo_all_blocks=1 00:09:31.957 --rc geninfo_unexecuted_blocks=1 00:09:31.957 00:09:31.957 ' 00:09:31.957 09:03:10 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:31.957 09:03:10 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.957 09:03:10 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.957 09:03:10 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.957 09:03:10 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.957 09:03:10 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.957 09:03:10 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.957 09:03:10 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.957 09:03:10 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:31.957 09:03:10 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
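The lt 1.15 2 walk traced above is how scripts/common.sh decides that the installed lcov predates 2.x: both version strings are split on '.', '-' and ':' (the IFS=.-: reads) and the fields are compared numerically from the left, and the outcome selects which LCOV_OPTS block gets exported. A condensed sketch of that comparison, written here as a hypothetical standalone helper rather than the exact cmp_versions implementation:

# Return success when version $1 is strictly lower than version $2.
lt() {
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields count as 0, so 1.15 compares like 1.15.0.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # equal versions are not strictly lower
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Here the very first fields already differ (1 < 2), so the helper succeeds on its first pass, exactly the (( ver1[v] < ver2[v] )) step visible in the trace.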
00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:31.957 09:03:10 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:31.957 09:03:10 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:31.957 09:03:10 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:32.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:32.479 Waiting for block devices as requested 00:09:32.479 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.479 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.740 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.740 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:38.154 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:38.154 09:03:16 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:38.154 09:03:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.154 09:03:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:38.154 09:03:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.154 09:03:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.154 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:38.155 09:03:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:38.155 09:03:16 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.155 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:38.156 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:38.156 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.156 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:38.157 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.157 
09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:38.157 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:38.158 09:03:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.158 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:38.159 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:38.159 09:03:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.159 09:03:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:38.159 09:03:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.159 09:03:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:38.159 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 
09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.160 09:03:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:38.160 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 
09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:38.161 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.161 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.162 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:38.163 09:03:16 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.163 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:38.164 09:03:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.164 09:03:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:38.164 09:03:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.164 09:03:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:38.164 
09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:38.164 09:03:16 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.164 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- 
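The values captured here are raw register contents, so a little arithmetic is needed to read them: ver=0x10400 packs the spec version as major<<16 | minor<<8 | tertiary (NVMe 1.4.0), and mdts=7 caps a single transfer at 2^7 minimum-size pages. A hedged decode, where the 4096 assumes the usual 4 KiB CAP.MPSMIN rather than anything this log states:

    # Decode two captured id-ctrl values; 4 KiB MPSMIN is an assumption.
    ver=0x10400 mdts=7
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    printf 'max transfer: %d KiB\n' $(( (1 << mdts) * 4096 / 1024 ))

With those inputs this prints NVMe 1.4.0 and a 512 KiB transfer cap.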
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.165 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.166 09:03:16 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
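The oncs=0x15d just captured is a capability bitmask. Using the ONCS bit positions from the NVMe base specification (an outside reference, not something this log spells out), 0x15d sets bits 0, 2, 3, 4, 6 and 8, i.e. this QEMU controller advertises Compare, Dataset Management, Write Zeroes, Save/Select in Features, Timestamp and Copy, but not Write Uncorrectable, Reservations or Verify. A small checker under that assumption:

    # ONCS bit names assumed from the NVMe base spec, not from this log.
    oncs=0x15d
    names=(Compare WriteUncorrectable DSM WriteZeroes SaveSelect Reservations Timestamp Verify Copy)
    for bit in "${!names[@]}"; do
        (( oncs & (1 << bit) )) && echo "supported: ${names[bit]}"
    done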
00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.166 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:38.167 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
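With id-ctrl done, the functions.sh@53-@57 lines above switch to the per-namespace pass: a nameref (_ctrl_ns) is bound to nvme2_ns, the controller's namespace nodes are discovered by globbing its sysfs directory, and the same reg/val walk is repeated against nvme id-ns for each one, indexed by namespace number. A sketch of that enumeration, assuming the usual /sys/class/nvme layout and nvme-cli:

    # Sketch of the per-namespace pass; sysfs layout is assumed.
    ctrl=/sys/class/nvme/nvme2
    declare -A ns_map=()
    for ns in "$ctrl/${ctrl##*/}n"*; do      # same glob as functions.sh@54
        [[ -e $ns ]] || continue             # glob may have matched nothing
        dev=${ns##*/}                        # e.g. nvme2n1
        ns_map[${dev##*n}]=$dev              # index by namespace number, like @58
        nvme id-ns "/dev/$dev" | grep -E '^(nsze|ncap|nuse)'
    done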
00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.167 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:38.168 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- 
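Each lbafN row above pairs a metadata size (ms), an LBA data size exponent (lbads) and a relative performance hint (rp). lbads is a power-of-two exponent, so lbads:9 means 512-byte and lbads:12 means 4096-byte sectors, and the "(in use)" tag on lbaf4 agrees with the flbas=0x4 captured for nvme2n1 earlier, since the low bits of flbas select the active format. A one-line check of that arithmetic:

    # lbaf4 above reads ms:0 lbads:12, i.e. metadata-less 4 KiB LBAs.
    lbads=12
    echo "LBA size: $((1 << lbads)) bytes"

Combined with nsze=0x100000 from the same namespace, that works out to 1,048,576 blocks of 4 KiB, a 4 GiB namespace.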
nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:38.169 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.169 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:38.170 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
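The register-by-register walk above, which continues below for the rest of the nvme2n3 fields, is nvme_get from test/nvme/functions.sh: it runs nvme-cli's id-ns against the namespace, reads each output line with IFS=: into a register name and a value, and evals non-empty pairs into a global associative array named after the device (nvme2n3[nsze]=0x100000 and so on). A minimal standalone sketch of that parsing pattern, fed canned id-ns text instead of a live /dev/nvme2n3 (the ns_regs name and the sample lines are illustrative, not from functions.sh):

# Sketch of the parsing loop traced above, not the SPDK code itself.
declare -A ns_regs
while IFS=: read -r reg val; do
    [[ -n $val ]] || continue       # banner lines carry no value and are skipped
    reg=${reg//[[:space:]]/}        # "lbaf  0" -> "lbaf0", matching the keys in the trace
    ns_regs[$reg]=${val# }          # drop the pad space after the colon
done <<'EOF'
nsze   : 0x100000
flbas  : 0x4
lbaf  0 : ms:0   lbads:9  rp:0
EOF
echo "nsze=${ns_regs[nsze]} lbaf0=${ns_regs[lbaf0]}"

The real helper keeps suffixes such as "(in use)" verbatim, as the lbaf4 entry above shows; the shape of the loop is the same.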
00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:38.171 
09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:38.171 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:38.172 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:38.172 09:03:16 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:38.172 09:03:16 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:38.172 09:03:16 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:38.172 09:03:16 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:38.172 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.172 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 
09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:38.173 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 
09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:38.174 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:38.175 09:03:16 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:38.175 09:03:16 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:38.175 09:03:16 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:38.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.032 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.032 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.032 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.032 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:39.032 09:03:17 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.032 09:03:17 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.032 09:03:17 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.032 09:03:17 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.033 ************************************ 00:09:39.033 START TEST nvme_flexible_data_placement 00:09:39.033 ************************************ 00:09:39.033 09:03:17 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:39.291 Initializing NVMe Controllers 00:09:39.291 Attaching to 0000:00:13.0 00:09:39.291 Controller supports FDP Attached to 0000:00:13.0 00:09:39.291 Namespace ID: 1 Endurance Group ID: 1 00:09:39.291 Initialization complete. 00:09:39.291 00:09:39.291 ================================== 00:09:39.291 == FDP tests for Namespace: #01 == 00:09:39.291 ================================== 00:09:39.291 00:09:39.291 Get Feature: FDP: 00:09:39.291 ================= 00:09:39.291 Enabled: Yes 00:09:39.291 FDP configuration Index: 0 00:09:39.291 00:09:39.291 FDP configurations log page 00:09:39.291 =========================== 00:09:39.291 Number of FDP configurations: 1 00:09:39.291 Version: 0 00:09:39.291 Size: 112 00:09:39.291 FDP Configuration Descriptor: 0 00:09:39.291 Descriptor Size: 96 00:09:39.291 Reclaim Group Identifier format: 2 00:09:39.291 FDP Volatile Write Cache: Not Present 00:09:39.291 FDP Configuration: Valid 00:09:39.291 Vendor Specific Size: 0 00:09:39.291 Number of Reclaim Groups: 2 00:09:39.291 Number of Reclaim Unit Handles: 8 00:09:39.291 Max Placement Identifiers: 128 00:09:39.291 Number of Namespaces Supported: 256 00:09:39.291 Reclaim Unit Nominal Size: 6000000 bytes 00:09:39.291 Estimated Reclaim Unit Time Limit: Not Reported 00:09:39.291 RUH Desc #000: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #001: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #002: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #003: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #004: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #005: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #006: RUH Type: Initially Isolated 00:09:39.291 RUH Desc #007: RUH Type: Initially Isolated 00:09:39.291 00:09:39.291 FDP reclaim unit handle usage log page 00:09:39.291 ====================================== 00:09:39.291 Number of Reclaim Unit Handles: 8 00:09:39.291 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:39.291 RUH Usage Desc #001: RUH Attributes: Unused 00:09:39.291 RUH Usage Desc #002: RUH Attributes: Unused 00:09:39.291 RUH Usage Desc #003: RUH Attributes: Unused 00:09:39.291 RUH Usage Desc #004: RUH Attributes: Unused 00:09:39.291 RUH Usage Desc #005: RUH Attributes: Unused 00:09:39.291 RUH Usage Desc #006: RUH Attributes: Unused 00:09:39.291 RUH Usage Desc #007: RUH Attributes: Unused 00:09:39.291 00:09:39.291 FDP statistics log page 00:09:39.291 ======================= 00:09:39.291 Host bytes with metadata written: 858497024 00:09:39.291 Media bytes with metadata written: 858591232 00:09:39.291 Media bytes erased: 0 00:09:39.291 00:09:39.291 FDP Reclaim unit handle status 00:09:39.291 ============================== 00:09:39.291 Number of RUHS descriptors: 2 00:09:39.291 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000002d46 00:09:39.291 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:39.291 00:09:39.291 FDP write on placement id: 0 success 00:09:39.291 00:09:39.291 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:09:39.291 00:09:39.291 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:39.291 00:09:39.291 Get Feature: FDP Events for Placement handle: #0 00:09:39.291 ======================== 00:09:39.291 Number of FDP Events: 6 00:09:39.291 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:39.291 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:39.291 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:39.291 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:39.291 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:39.291 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:39.291 00:09:39.291 FDP events log page 00:09:39.291 =================== 00:09:39.291 Number of FDP events: 1 00:09:39.291 FDP Event #0: 00:09:39.291 Event Type: RU Not Written to Capacity 00:09:39.291 Placement Identifier: Valid 00:09:39.291 NSID: Valid 00:09:39.291 Location: Valid 00:09:39.291 Placement Identifier: 0 00:09:39.291 Event Timestamp: e 00:09:39.292 Namespace Identifier: 1 00:09:39.292 Reclaim Group Identifier: 0 00:09:39.292 Reclaim Unit Handle Identifier: 0 00:09:39.292 00:09:39.292 FDP test passed 00:09:39.292 00:09:39.292 real 0m0.246s 00:09:39.292 ************************************ 00:09:39.292 END TEST nvme_flexible_data_placement 00:09:39.292 ************************************ 00:09:39.292 user 0m0.079s 00:09:39.292 sys 0m0.065s 00:09:39.292 09:03:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.292 09:03:18 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:39.292 00:09:39.292 real 0m7.571s 00:09:39.292 user 0m1.033s 00:09:39.292 sys 0m1.362s 00:09:39.292 09:03:18 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.292 ************************************ 00:09:39.292 END TEST nvme_fdp 00:09:39.292 ************************************ 00:09:39.292 09:03:18 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:39.292 09:03:18 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:39.292 09:03:18 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.292 09:03:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.292 09:03:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.292 09:03:18 -- common/autotest_common.sh@10 -- # set +x 00:09:39.550 ************************************ 00:09:39.550 START TEST nvme_rpc 00:09:39.550 ************************************ 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:39.550 * Looking for test storage...
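The ctrl_has_fdp walk traced before this test selected nvme3 because FDP support is advertised in bit 19 of the Identify Controller CTRATT field: the 0x88010 reported by nvme3 has the bit set, while the 0x8000 of the other three controllers does not. The same check can be made against a kernel-attached controller; a minimal sketch, assuming nvme-cli is installed and using /dev/nvme3 as a placeholder device node (the test above talks to the device through SPDK instead):

  # Read CTRATT from Identify Controller and test bit 19 (Flexible Data Placement).
  # /dev/nvme3 is a placeholder device node, not one of the uio-bound devices in this run.
  ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
  if (( ctratt & (1 << 19) )); then
    echo "FDP capable (ctratt=$ctratt)"
  else
    echo "no FDP (ctratt=$ctratt)"
  fi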
00:09:39.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.550 09:03:18 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.550 --rc genhtml_branch_coverage=1 00:09:39.550 --rc genhtml_function_coverage=1 00:09:39.550 --rc genhtml_legend=1 00:09:39.550 --rc geninfo_all_blocks=1 00:09:39.550 --rc geninfo_unexecuted_blocks=1 00:09:39.550 00:09:39.550 ' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.550 --rc genhtml_branch_coverage=1 00:09:39.550 --rc genhtml_function_coverage=1 00:09:39.550 --rc genhtml_legend=1 00:09:39.550 --rc geninfo_all_blocks=1 00:09:39.550 --rc geninfo_unexecuted_blocks=1 00:09:39.550 00:09:39.550 ' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.550 --rc genhtml_branch_coverage=1 00:09:39.550 --rc genhtml_function_coverage=1 00:09:39.550 --rc genhtml_legend=1 00:09:39.550 --rc geninfo_all_blocks=1 00:09:39.550 --rc geninfo_unexecuted_blocks=1 00:09:39.550 00:09:39.550 ' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.550 --rc genhtml_branch_coverage=1 00:09:39.550 --rc genhtml_function_coverage=1 00:09:39.550 --rc genhtml_legend=1 00:09:39.550 --rc geninfo_all_blocks=1 00:09:39.550 --rc geninfo_unexecuted_blocks=1 00:09:39.550 00:09:39.550 ' 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:39.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65868 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65868 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65868 ']' 00:09:39.550 09:03:18 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.550 09:03:18 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.809 [2024-11-20 09:03:18.491230] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
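The get_first_nvme_bdf call traced above reduces to one pipeline: ask gen_nvme.sh for the generated bdev config, pull out every traddr, and keep the first. A condensed sketch with the paths used in this job (head -n1 stands in for the array indexing the helper actually does):

  # First NVMe BDF, as selected above; on this VM it resolves to 0000:00:10.0.
  bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  echo "$bdf"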
00:09:39.809 [2024-11-20 09:03:18.491842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65868 ] 00:09:39.809 [2024-11-20 09:03:18.652753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.069 [2024-11-20 09:03:18.754526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.069 [2024-11-20 09:03:18.754651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.637 09:03:19 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.637 09:03:19 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:40.637 09:03:19 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:40.896 Nvme0n1 00:09:40.896 09:03:19 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:40.896 09:03:19 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:40.896 request: 00:09:40.896 { 00:09:40.896 "bdev_name": "Nvme0n1", 00:09:40.896 "filename": "non_existing_file", 00:09:40.896 "method": "bdev_nvme_apply_firmware", 00:09:40.896 "req_id": 1 00:09:40.896 } 00:09:40.896 Got JSON-RPC error response 00:09:40.896 response: 00:09:40.896 { 00:09:40.896 "code": -32603, 00:09:40.896 "message": "open file failed." 00:09:40.896 } 00:09:40.896 09:03:19 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:40.896 09:03:19 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:40.896 09:03:19 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:41.155 09:03:20 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:41.155 09:03:20 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65868 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65868 ']' 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65868 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65868 00:09:41.155 killing process with pid 65868 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65868' 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65868 00:09:41.155 09:03:20 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65868 00:09:43.056 ************************************ 00:09:43.056 END TEST nvme_rpc 00:09:43.057 ************************************ 00:09:43.057 00:09:43.057 real 0m3.246s 00:09:43.057 user 0m6.193s 00:09:43.057 sys 0m0.468s 00:09:43.057 09:03:21 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:43.057 09:03:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 09:03:21 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:43.057 09:03:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:43.057 09:03:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.057 09:03:21 -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 ************************************ 00:09:43.057 START TEST nvme_rpc_timeouts 00:09:43.057 ************************************ 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:43.057 * Looking for test storage... 00:09:43.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.057 09:03:21 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
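The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version against 2: both strings are split on '.', '-' and ':' and the fields are compared numerically left to right, padding the shorter version with zeros. Because 1.15 < 2, the old-style --rc lcov_branch_coverage=1 option names are exported, as seen in the LCOV_OPTS blocks. A minimal re-creation of the comparison idea (not the full script, which also validates that each field is a plain decimal):

  ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)                # split both versions on . - :
    local v max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < max; v++ )); do
      (( ${a[v]:-0} < ${b[v]:-0} )) && return 0   # first strictly smaller field decides
      (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
    done
    return 1                              # equal versions are not "less than"
  }
  ver_lt 1.15 2 && echo "1.15 < 2"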
00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:43.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.057 --rc genhtml_branch_coverage=1 00:09:43.057 --rc genhtml_function_coverage=1 00:09:43.057 --rc genhtml_legend=1 00:09:43.057 --rc geninfo_all_blocks=1 00:09:43.057 --rc geninfo_unexecuted_blocks=1 00:09:43.057 00:09:43.057 ' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65933 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65933 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65966 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65966 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65966 ']' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.057 09:03:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:43.057 09:03:21 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 [2024-11-20 09:03:21.709822] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:09:43.057 [2024-11-20 09:03:21.710102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65966 ] 00:09:43.057 [2024-11-20 09:03:21.870294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:43.316 [2024-11-20 09:03:21.975012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.316 [2024-11-20 09:03:21.975228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.882 09:03:22 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.882 09:03:22 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:43.882 Checking default timeout settings: 00:09:43.882 09:03:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:43.882 09:03:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.139 Making settings changes with rpc: 00:09:44.139 09:03:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:44.139 09:03:22 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:44.397 Check default vs. modified settings: 00:09:44.397 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:09:44.397 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:44.656 Setting action_on_timeout is changed as expected. 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:44.656 Setting timeout_us is changed as expected. 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:44.656 Setting timeout_admin_us is changed as expected. 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
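Each settings check above follows the same pattern: save_config before and after the RPC, grep the key out of both snapshots, normalize the value, and confirm it moved off the default. The whole test collapses to roughly this (file names simplified; same grep/awk/sed normalization as in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc save_config > /tmp/settings_default
  $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc save_config > /tmp/settings_modified
  for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
  done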
00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65933 /tmp/settings_modified_65933 00:09:44.656 09:03:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65966 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65966 ']' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65966 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65966 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.656 killing process with pid 65966 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65966' 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65966 00:09:44.656 09:03:23 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65966 00:09:46.031 RPC TIMEOUT SETTING TEST PASSED. 00:09:46.031 09:03:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:09:46.031 00:09:46.031 real 0m3.369s 00:09:46.031 user 0m6.568s 00:09:46.031 sys 0m0.492s 00:09:46.031 09:03:24 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.031 09:03:24 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:46.031 ************************************ 00:09:46.031 END TEST nvme_rpc_timeouts 00:09:46.031 ************************************ 00:09:46.031 09:03:24 -- spdk/autotest.sh@239 -- # uname -s 00:09:46.031 09:03:24 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:46.031 09:03:24 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.031 09:03:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.031 09:03:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.031 09:03:24 -- common/autotest_common.sh@10 -- # set +x 00:09:46.031 ************************************ 00:09:46.031 START TEST sw_hotplug 00:09:46.031 ************************************ 00:09:46.031 09:03:24 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:46.290 * Looking for test storage... 
00:09:46.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:46.290 09:03:24 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:46.290 09:03:24 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:09:46.290 09:03:24 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:46.290 09:03:25 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:46.290 09:03:25 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:46.291 09:03:25 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:46.291 09:03:25 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:46.291 09:03:25 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:46.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.291 --rc genhtml_branch_coverage=1 00:09:46.291 --rc genhtml_function_coverage=1 00:09:46.291 --rc genhtml_legend=1 00:09:46.291 --rc geninfo_all_blocks=1 00:09:46.291 --rc geninfo_unexecuted_blocks=1 00:09:46.291 00:09:46.291 ' 00:09:46.291 09:03:25 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:46.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.291 --rc genhtml_branch_coverage=1 00:09:46.291 --rc genhtml_function_coverage=1 00:09:46.291 --rc genhtml_legend=1 00:09:46.291 --rc geninfo_all_blocks=1 00:09:46.291 --rc geninfo_unexecuted_blocks=1 00:09:46.291 00:09:46.291 ' 00:09:46.291 09:03:25 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:46.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.291 --rc genhtml_branch_coverage=1 00:09:46.291 --rc genhtml_function_coverage=1 00:09:46.291 --rc genhtml_legend=1 00:09:46.291 --rc geninfo_all_blocks=1 00:09:46.291 --rc geninfo_unexecuted_blocks=1 00:09:46.291 00:09:46.291 ' 00:09:46.291 09:03:25 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:46.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:46.291 --rc genhtml_branch_coverage=1 00:09:46.291 --rc genhtml_function_coverage=1 00:09:46.291 --rc genhtml_legend=1 00:09:46.291 --rc geninfo_all_blocks=1 00:09:46.291 --rc geninfo_unexecuted_blocks=1 00:09:46.291 00:09:46.291 ' 00:09:46.291 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.561 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.561 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.561 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.561 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:46.561 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:46.561 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:46.561 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:09:46.561 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:46.562 09:03:25 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.820 
09:03:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:46.820 09:03:25 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:46.820 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:46.820 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:46.820 09:03:25 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.080 Waiting for block devices as requested 00:09:47.080 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.338 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.338 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.338 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.611 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.611 09:03:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:52.611 09:03:31 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:52.869 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:52.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:52.869 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:53.127 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:53.127 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.127 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:53.385 09:03:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66824 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:53.385 09:03:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:53.385 09:03:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:53.385 09:03:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:53.385 09:03:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:53.385 09:03:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:53.385 09:03:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:53.643 Initializing NVMe Controllers 00:09:53.643 Attaching to 0000:00:10.0 00:09:53.643 Attaching to 0000:00:11.0 00:09:53.643 Attached to 0000:00:11.0 00:09:53.643 Attached to 0000:00:10.0 00:09:53.643 Initialization complete. Starting I/O... 00:09:53.643 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:53.643 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:53.643 00:09:54.576 QEMU NVMe Ctrl (12341 ): 2929 I/Os completed (+2929) 00:09:54.576 QEMU NVMe Ctrl (12340 ): 3040 I/Os completed (+3040) 00:09:54.576 00:09:55.509 QEMU NVMe Ctrl (12341 ): 6225 I/Os completed (+3296) 00:09:55.509 QEMU NVMe Ctrl (12340 ): 6412 I/Os completed (+3372) 00:09:55.509 00:09:56.460 QEMU NVMe Ctrl (12341 ): 9568 I/Os completed (+3343) 00:09:56.460 QEMU NVMe Ctrl (12340 ): 9746 I/Os completed (+3334) 00:09:56.460 00:09:57.831 QEMU NVMe Ctrl (12341 ): 12947 I/Os completed (+3379) 00:09:57.831 QEMU NVMe Ctrl (12340 ): 13020 I/Os completed (+3274) 00:09:57.831 00:09:58.764 QEMU NVMe Ctrl (12341 ): 16248 I/Os completed (+3301) 00:09:58.764 QEMU NVMe Ctrl (12340 ): 16219 I/Os completed (+3199) 00:09:58.764 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.330 [2024-11-20 09:03:38.190147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.330 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.330 [2024-11-20 09:03:38.191459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.191517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.191535] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.191553] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.330 [2024-11-20 09:03:38.193292] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.193340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.193357] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.193371] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.330 [2024-11-20 09:03:38.213071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
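Each removal above is the helper detaching a device at the PCI level; the "failed state" and abort messages are SPDK's driver reacting to the function disappearing, and the device is brought back after hotplug_wait. The echo 1 at sw_hotplug.sh@40 and @56 appears to line up with the kernel's per-device remove and bus-wide rescan attributes, followed by the driver_override/bind echoes at @59-@61. A sketch of the underlying sysfs mechanics (standard kernel interface; the test itself drives it through the script plus the hotplug example binary):

  bdf=0000:00:10.0                                     # one of the two devices cycled here
  echo 1 | sudo tee /sys/bus/pci/devices/$bdf/remove   # device vanishes; SPDK logs "failed state"
  sleep 6                                              # hotplug_wait, as configured above
  echo 1 | sudo tee /sys/bus/pci/rescan                # re-enumerate; the function reappears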
00:09:59.330 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.330 [2024-11-20 09:03:38.214166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.214210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.214232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.214248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.330 [2024-11-20 09:03:38.215940] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.215979] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.215994] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 [2024-11-20 09:03:38.216008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.330 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.330 EAL: Scan for (pci) bus failed. 00:09:59.330 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.586 Attaching to 0000:00:10.0 00:09:59.586 Attached to 0000:00:10.0 00:09:59.586 QEMU NVMe Ctrl (12340 ): 8 I/Os completed (+8) 00:09:59.586 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.586 09:03:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:59.586 Attaching to 0000:00:11.0 00:09:59.587 Attached to 0000:00:11.0 00:10:00.515 QEMU NVMe Ctrl (12340 ): 3424 I/Os completed (+3416) 00:10:00.515 QEMU NVMe Ctrl (12341 ): 3064 I/Os completed (+3064) 00:10:00.515 00:10:01.883 QEMU NVMe Ctrl (12340 ): 6747 I/Os completed (+3323) 00:10:01.883 QEMU NVMe Ctrl (12341 ): 6276 I/Os completed (+3212) 00:10:01.883 00:10:02.815 QEMU NVMe Ctrl (12340 ): 9954 I/Os completed (+3207) 00:10:02.815 QEMU NVMe Ctrl (12341 ): 9362 I/Os completed (+3086) 00:10:02.815 00:10:03.748 QEMU NVMe Ctrl (12340 ): 13270 I/Os completed (+3316) 00:10:03.748 QEMU NVMe Ctrl (12341 ): 12640 I/Os completed (+3278) 00:10:03.748 00:10:04.765 QEMU NVMe Ctrl (12340 ): 16460 I/Os completed (+3190) 00:10:04.765 QEMU NVMe Ctrl (12341 ): 15809 I/Os completed (+3169) 00:10:04.765 00:10:05.698 QEMU NVMe Ctrl (12340 ): 19750 I/Os completed (+3290) 00:10:05.698 QEMU NVMe Ctrl (12341 ): 19028 I/Os completed (+3219) 00:10:05.698 00:10:06.631 QEMU NVMe Ctrl (12340 ): 23026 I/Os completed (+3276) 00:10:06.631 
QEMU NVMe Ctrl (12341 ): 22149 I/Os completed (+3121) 00:10:06.631 00:10:07.564 QEMU NVMe Ctrl (12340 ): 26220 I/Os completed (+3194) 00:10:07.564 QEMU NVMe Ctrl (12341 ): 25255 I/Os completed (+3106) 00:10:07.564 00:10:08.495 QEMU NVMe Ctrl (12340 ): 29536 I/Os completed (+3316) 00:10:08.495 QEMU NVMe Ctrl (12341 ): 28585 I/Os completed (+3330) 00:10:08.495 00:10:09.866 QEMU NVMe Ctrl (12340 ): 33292 I/Os completed (+3756) 00:10:09.866 QEMU NVMe Ctrl (12341 ): 32594 I/Os completed (+4009) 00:10:09.866 00:10:10.798 QEMU NVMe Ctrl (12340 ): 36636 I/Os completed (+3344) 00:10:10.798 QEMU NVMe Ctrl (12341 ): 35757 I/Os completed (+3163) 00:10:10.798 00:10:11.733 QEMU NVMe Ctrl (12340 ): 39727 I/Os completed (+3091) 00:10:11.733 QEMU NVMe Ctrl (12341 ): 38965 I/Os completed (+3208) 00:10:11.733 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.733 [2024-11-20 09:03:50.457788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:11.733 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:11.733 [2024-11-20 09:03:50.458977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.459026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.459043] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.459061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:11.733 [2024-11-20 09:03:50.461015] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.461065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.461079] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.461094] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:11.733 [2024-11-20 09:03:50.481051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:11.733 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:11.733 [2024-11-20 09:03:50.482107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.482146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.482166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.482181] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:11.733 [2024-11-20 09:03:50.483809] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.483844] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.483860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 [2024-11-20 09:03:50.483885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:11.733 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:11.733 EAL: Scan for (pci) bus failed. 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:11.733 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:11.733 Attaching to 0000:00:10.0 00:10:11.733 Attached to 0000:00:10.0 00:10:11.991 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:11.991 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:11.991 09:03:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:11.991 Attaching to 0000:00:11.0 00:10:11.991 Attached to 0000:00:11.0 00:10:12.557 QEMU NVMe Ctrl (12340 ): 2499 I/Os completed (+2499) 00:10:12.557 QEMU NVMe Ctrl (12341 ): 2366 I/Os completed (+2366) 00:10:12.557 00:10:13.548 QEMU NVMe Ctrl (12340 ): 5951 I/Os completed (+3452) 00:10:13.548 QEMU NVMe Ctrl (12341 ): 5904 I/Os completed (+3538) 00:10:13.548 00:10:14.484 QEMU NVMe Ctrl (12340 ): 9066 I/Os completed (+3115) 00:10:14.484 QEMU NVMe Ctrl (12341 ): 9080 I/Os completed (+3176) 00:10:14.484 00:10:15.858 QEMU NVMe Ctrl (12340 ): 12264 I/Os completed (+3198) 00:10:15.858 QEMU NVMe Ctrl (12341 ): 12304 I/Os completed (+3224) 00:10:15.858 00:10:16.794 QEMU NVMe Ctrl (12340 ): 15325 I/Os completed (+3061) 00:10:16.794 QEMU NVMe Ctrl (12341 ): 15392 I/Os completed (+3088) 00:10:16.794 00:10:17.735 QEMU NVMe Ctrl (12340 ): 18427 I/Os completed (+3102) 00:10:17.735 QEMU NVMe Ctrl (12341 ): 18509 I/Os completed (+3117) 00:10:17.735 00:10:18.713 QEMU NVMe Ctrl (12340 ): 21600 I/Os completed (+3173) 00:10:18.713 QEMU NVMe Ctrl (12341 ): 21689 I/Os completed (+3180) 00:10:18.713 
00:10:19.647 QEMU NVMe Ctrl (12340 ): 25021 I/Os completed (+3421) 00:10:19.647 QEMU NVMe Ctrl (12341 ): 25080 I/Os completed (+3391) 00:10:19.647 00:10:20.578 QEMU NVMe Ctrl (12340 ): 28504 I/Os completed (+3483) 00:10:20.578 QEMU NVMe Ctrl (12341 ): 28594 I/Os completed (+3514) 00:10:20.578 00:10:21.518 QEMU NVMe Ctrl (12340 ): 31706 I/Os completed (+3202) 00:10:21.518 QEMU NVMe Ctrl (12341 ): 31803 I/Os completed (+3209) 00:10:21.518 00:10:22.898 QEMU NVMe Ctrl (12340 ): 34809 I/Os completed (+3103) 00:10:22.898 QEMU NVMe Ctrl (12341 ): 34892 I/Os completed (+3089) 00:10:22.898 00:10:23.464 QEMU NVMe Ctrl (12340 ): 38213 I/Os completed (+3404) 00:10:23.464 QEMU NVMe Ctrl (12341 ): 38344 I/Os completed (+3452) 00:10:23.464 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.029 [2024-11-20 09:04:02.699740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:24.029 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:24.029 [2024-11-20 09:04:02.700959] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.701012] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.701030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.701050] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.029 [2024-11-20 09:04:02.703135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.703198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.703220] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.703243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:24.029 [2024-11-20 09:04:02.725107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
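The xtrace lines above (sw_hotplug.sh@39-@66) trace one surprise-removal/re-attach cycle. Below is a minimal bash sketch of that cycle; the trace only records the echoed values, so the sysfs redirect targets shown here are assumptions inferred from those values, not taken from the script itself.

remove_and_reattach() {
    local dev
    # @39-@40: surprise-remove each controller (redirect target assumed)
    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"
    done
    # @56: trigger rediscovery of the removed functions (target assumed)
    echo 1 > /sys/bus/pci/rescan
    # @58-@62: steer each BDF back to uio_pci_generic, re-probe, clear override;
    # the trace echoes the BDF twice (@60/@61), target for those echoes assumed
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
    # @66: give the hotplug monitor time to observe both events
    sleep 12
}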
00:10:24.029 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:24.029 [2024-11-20 09:04:02.726202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.726248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.726267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.726283] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.029 [2024-11-20 09:04:02.728025] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.728066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.728083] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 [2024-11-20 09:04:02.728097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:24.029 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:24.029 EAL: Scan for (pci) bus failed. 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:24.029 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:24.029 Attaching to 0000:00:10.0 00:10:24.029 Attached to 0000:00:10.0 00:10:24.286 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:24.286 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:24.286 09:04:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:24.286 Attaching to 0000:00:11.0 00:10:24.286 Attached to 0000:00:11.0 00:10:24.286 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:24.286 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:24.286 [2024-11-20 09:04:02.974381] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:36.481 09:04:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:36.482 09:04:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.482 09:04:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.78 00:10:36.482 09:04:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.78 00:10:36.482 09:04:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:36.482 09:04:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.78 00:10:36.482 09:04:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.78 2 00:10:36.482 remove_attach_helper took 42.78s to complete (handling 2 nvme drive(s)) 09:04:14 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66824 00:10:43.130 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66824) - No such process 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66824 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67369 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67369 00:10:43.130 09:04:20 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:43.130 09:04:20 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67369 ']' 00:10:43.130 09:04:20 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.130 09:04:20 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.130 09:04:20 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.130 09:04:20 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.130 09:04:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.130 [2024-11-20 09:04:21.058210] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:10:43.130 [2024-11-20 09:04:21.058358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67369 ] 00:10:43.130 [2024-11-20 09:04:21.265478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.130 [2024-11-20 09:04:21.403197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:43.130 09:04:22 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:43.130 09:04:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.714 09:04:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.714 09:04:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.714 09:04:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:49.714 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:49.714 [2024-11-20 09:04:28.110191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:49.714 [2024-11-20 09:04:28.112409] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.714 [2024-11-20 09:04:28.112621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.714 [2024-11-20 09:04:28.112649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.714 [2024-11-20 09:04:28.112678] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.714 [2024-11-20 09:04:28.112690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.714 [2024-11-20 09:04:28.112701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.714 [2024-11-20 09:04:28.112711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.715 [2024-11-20 09:04:28.112723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.715 [2024-11-20 09:04:28.112731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.715 [2024-11-20 09:04:28.112748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.715 [2024-11-20 09:04:28.112757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.715 [2024-11-20 09:04:28.112768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.715 [2024-11-20 09:04:28.510199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
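Before the bdev-backed loop above runs, the trace (sw_hotplug.sh@107-@115) shows the target being brought up and the hotplug monitor enabled over RPC. A hedged reconstruction follows, assuming killprocess, waitforlisten, and rpc_cmd are the SPDK autotest helpers named in the trace:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &       # @109-@110: launch the target
spdk_tgt_pid=$!
# @112: on any exit, kill the target and force a PCI rescan so the node is
# left with its NVMe devices re-attached
trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
waitforlisten "$spdk_tgt_pid"                           # @113: wait on /var/tmp/spdk.sock
rpc_cmd bdev_nvme_set_hotplug -e                        # @115: enable the hotplug monitor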
00:10:49.715 [2024-11-20 09:04:28.512343] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.715 [2024-11-20 09:04:28.512399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.715 [2024-11-20 09:04:28.512415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.715 [2024-11-20 09:04:28.512440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.715 [2024-11-20 09:04:28.512453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.715 [2024-11-20 09:04:28.512463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.715 [2024-11-20 09:04:28.512475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.715 [2024-11-20 09:04:28.512484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.715 [2024-11-20 09:04:28.512495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.715 [2024-11-20 09:04:28.512505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:49.715 [2024-11-20 09:04:28.512516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:49.715 [2024-11-20 09:04:28.512525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:49.715 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:49.715 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:49.715 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:49.715 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:49.715 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:49.715 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:49.715 09:04:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.715 09:04:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:49.715 09:04:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:49.975 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:49.976 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:49.976 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:49.976 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:50.237 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:50.237 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:50.237 09:04:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.522 09:04:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.522 09:04:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.522 09:04:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.522 09:04:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.522 [2024-11-20 09:04:41.010387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:02.522 [2024-11-20 09:04:41.012118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.012234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.522 [2024-11-20 09:04:41.012307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.522 [2024-11-20 09:04:41.012380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.012408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.522 [2024-11-20 09:04:41.012469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.522 [2024-11-20 09:04:41.012528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.012550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.522 [2024-11-20 09:04:41.012614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.522 [2024-11-20 09:04:41.012643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.012660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.522 [2024-11-20 09:04:41.012703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:02.522 09:04:41 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:02.522 09:04:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.522 09:04:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.522 09:04:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:02.522 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:02.522 [2024-11-20 09:04:41.410398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:02.522 [2024-11-20 09:04:41.411858] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.411907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.522 [2024-11-20 09:04:41.411923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.522 [2024-11-20 09:04:41.411941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.411950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.522 [2024-11-20 09:04:41.411958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.522 [2024-11-20 09:04:41.411968] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.522 [2024-11-20 09:04:41.411975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.523 [2024-11-20 09:04:41.411984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.523 [2024-11-20 09:04:41.411991] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:02.523 [2024-11-20 09:04:41.412000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:02.523 [2024-11-20 09:04:41.412007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:02.780 09:04:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.780 09:04:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.780 09:04:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:02.780 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:03.037 09:04:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.395 09:04:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.395 09:04:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.395 09:04:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.395 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.395 09:04:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.395 09:04:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.395 [2024-11-20 09:04:53.910624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:15.395 [2024-11-20 09:04:53.912151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.395 [2024-11-20 09:04:53.912179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.395 [2024-11-20 09:04:53.912190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.395 [2024-11-20 09:04:53.912209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.395 [2024-11-20 09:04:53.912217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.395 [2024-11-20 09:04:53.912229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.395 [2024-11-20 09:04:53.912236] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.395 [2024-11-20 09:04:53.912245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.395 [2024-11-20 09:04:53.912252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.395 [2024-11-20 09:04:53.912260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.395 [2024-11-20 09:04:53.912267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.396 [2024-11-20 09:04:53.912275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.396 09:04:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.396 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:15.396 09:04:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:15.654 [2024-11-20 09:04:54.410635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
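The bdev_bdfs helper driving these wait loops is fully visible in the trace (sw_hotplug.sh@12-@13); the only liberty in the sketch below is replacing the /dev/fd/63 process substitution shown in the trace with an equivalent pipe:

bdev_bdfs() {
    # Ask the target which bdevs still exist and map them to PCI addresses
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

# @50-@51: poll until no remaining bdev resolves to a removed BDF
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done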
00:11:15.654 [2024-11-20 09:04:54.412112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.654 [2024-11-20 09:04:54.412223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.654 [2024-11-20 09:04:54.412294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.654 [2024-11-20 09:04:54.412353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.654 [2024-11-20 09:04:54.412373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.654 [2024-11-20 09:04:54.412397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.654 [2024-11-20 09:04:54.412487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.654 [2024-11-20 09:04:54.412496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.654 [2024-11-20 09:04:54.412507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.654 [2024-11-20 09:04:54.412515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:15.654 [2024-11-20 09:04:54.412524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:15.654 [2024-11-20 09:04:54.412530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:15.654 09:04:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.654 09:04:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:15.654 09:04:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.654 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:15.912 09:04:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.72 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.72 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.72 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.72 2 00:11:28.098 remove_attach_helper took 44.72s to complete (handling 2 nvme drive(s)) 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:28.098 09:05:06 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:28.098 09:05:06 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:28.098 09:05:06 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.749 09:05:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.749 09:05:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.749 09:05:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.749 [2024-11-20 09:05:12.864892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:34.749 [2024-11-20 09:05:12.866218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:12.866250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:12.866262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 [2024-11-20 09:05:12.866282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:12.866289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:12.866298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 [2024-11-20 09:05:12.866305] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:12.866314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:12.866320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 [2024-11-20 09:05:12.866330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:12.866337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:12.866348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:34.749 09:05:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:34.749 09:05:13 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.749 09:05:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.749 09:05:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.749 09:05:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:34.749 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.749 [2024-11-20 09:05:13.564914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:34.749 [2024-11-20 09:05:13.567948] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:13.567994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:13.568010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 [2024-11-20 09:05:13.568031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:13.568043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:13.568052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 [2024-11-20 09:05:13.568063] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:13.568072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.749 [2024-11-20 09:05:13.568083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.749 [2024-11-20 09:05:13.568092] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.749 [2024-11-20 09:05:13.568103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.750 [2024-11-20 09:05:13.568112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.008 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:35.008 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.008 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.008 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.008 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:35.008 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.008 09:05:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.008 09:05:13 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.008 09:05:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.265 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:35.265 09:05:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:35.265 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:35.522 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.522 09:05:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.755 09:05:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.755 09:05:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.755 09:05:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.755 09:05:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.755 09:05:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.755 [2024-11-20 09:05:26.265131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:47.755 [2024-11-20 09:05:26.266440] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.755 [2024-11-20 09:05:26.266482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.755 [2024-11-20 09:05:26.266493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.755 [2024-11-20 09:05:26.266511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.755 [2024-11-20 09:05:26.266519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.755 [2024-11-20 09:05:26.266528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.755 [2024-11-20 09:05:26.266536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.755 [2024-11-20 09:05:26.266545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.755 [2024-11-20 09:05:26.266552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.755 [2024-11-20 09:05:26.266561] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:47.755 [2024-11-20 09:05:26.266571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:47.755 [2024-11-20 09:05:26.266584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:47.755 09:05:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:47.755 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:48.013 [2024-11-20 09:05:26.765146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
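After the re-attach sleep, the trace verifies that both controllers came back (sw_hotplug.sh@68-@71): it re-reads the BDF list from bdev_bdfs and compares it against the original pair. A short sketch, with the failure handling assumed rather than taken from the script:

# @70: re-read the BDF list once the hotplug window has elapsed
bdfs=($(bdev_bdfs))
# @71: the glob-escaped comparison in the trace reduces to a string match
# against the expected pair; treating a mismatch as fatal is an assumption
[[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]] || exit 1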
00:11:48.013 [2024-11-20 09:05:26.766263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.013 [2024-11-20 09:05:26.766298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.013 [2024-11-20 09:05:26.766311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.013 [2024-11-20 09:05:26.766330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.013 [2024-11-20 09:05:26.766343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.013 [2024-11-20 09:05:26.766351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.013 [2024-11-20 09:05:26.766368] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.013 [2024-11-20 09:05:26.766380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.013 [2024-11-20 09:05:26.766394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.013 [2024-11-20 09:05:26.766404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:48.013 [2024-11-20 09:05:26.766413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.013 [2024-11-20 09:05:26.766420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:48.013 09:05:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.013 09:05:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:48.013 09:05:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.013 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:48.272 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:48.272 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.272 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:48.272 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:48.272 09:05:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:48.272 09:05:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:48.272 09:05:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:48.272 09:05:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.463 09:05:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.463 09:05:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.463 09:05:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:00.463 [2024-11-20 09:05:39.165400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:00.463 [2024-11-20 09:05:39.166867] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.463 [2024-11-20 09:05:39.166910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.463 [2024-11-20 09:05:39.166922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.463 [2024-11-20 09:05:39.166941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.463 [2024-11-20 09:05:39.166948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.463 [2024-11-20 09:05:39.166957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.463 [2024-11-20 09:05:39.166965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.463 [2024-11-20 09:05:39.166976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.463 [2024-11-20 09:05:39.166983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.463 [2024-11-20 09:05:39.166992] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.463 [2024-11-20 09:05:39.166999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.463 [2024-11-20 09:05:39.167007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:00.463 09:05:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.463 09:05:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.463 09:05:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:00.463 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:00.721 [2024-11-20 09:05:39.565394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:00.721 [2024-11-20 09:05:39.566540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.721 [2024-11-20 09:05:39.566571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.721 [2024-11-20 09:05:39.566583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.721 [2024-11-20 09:05:39.566599] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.721 [2024-11-20 09:05:39.566609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.721 [2024-11-20 09:05:39.566616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.721 [2024-11-20 09:05:39.566626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.721 [2024-11-20 09:05:39.566633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.721 [2024-11-20 09:05:39.566641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.721 [2024-11-20 09:05:39.566648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:00.721 [2024-11-20 09:05:39.566664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:00.721 [2024-11-20 09:05:39.566671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:00.978 09:05:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.978 09:05:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:00.978 09:05:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:00.978 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:01.236 09:05:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:13.472 09:05:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:13.472 09:05:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:13.472 09:05:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:13.472 09:05:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:13.472 09:05:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:13.472 09:05:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:13.472 09:05:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.472 09:05:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.472 09:05:52 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:13.472 09:05:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.24 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.24 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:13.472 09:05:52 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.24 00:12:13.472 09:05:52 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.24 2 00:12:13.472 remove_attach_helper took 45.24s to complete (handling 2 nvme drive(s)) 09:05:52 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:13.472 09:05:52 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67369 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67369 ']' 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67369 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:13.472 09:05:52 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:13.473 09:05:52 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67369 00:12:13.473 09:05:52 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:13.473 09:05:52 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:13.473 killing process with pid 67369 00:12:13.473 09:05:52 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67369' 00:12:13.473 09:05:52 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67369 00:12:13.473 09:05:52 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67369 00:12:14.407 09:05:53 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:14.669 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:15.240 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:15.240 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:15.240 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:15.240 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:15.501 ************************************ 00:12:15.501 END TEST sw_hotplug 00:12:15.501 ************************************ 00:12:15.501 00:12:15.501 real 2m29.267s 00:12:15.501 user 1m51.132s 00:12:15.501 sys 0m16.742s 00:12:15.501 09:05:54 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:15.501 09:05:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:15.501 09:05:54 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:15.501 09:05:54 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:15.501 09:05:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.501 09:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.501 09:05:54 -- common/autotest_common.sh@10 -- # set +x 00:12:15.501 ************************************ 00:12:15.501 START TEST nvme_xnvme 00:12:15.501 ************************************ 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:15.501 * Looking for test storage... 
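The shutdown path traced just above is autotest_common.sh's killprocess: check the pid is set and still alive, resolve the command name on Linux, then signal and reap. Condensed sketch (the sudo special case visible in the trace is elided):

killprocess() {
    local pid=$1 process_name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1               # process must exist
    [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
    # the real helper branches when process_name is sudo, to avoid killing sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                  # reap if it is our child
}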
00:12:15.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.501 --rc genhtml_branch_coverage=1 00:12:15.501 --rc genhtml_function_coverage=1 00:12:15.501 --rc genhtml_legend=1 00:12:15.501 --rc geninfo_all_blocks=1 00:12:15.501 --rc geninfo_unexecuted_blocks=1 00:12:15.501 00:12:15.501 ' 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.501 --rc genhtml_branch_coverage=1 00:12:15.501 --rc genhtml_function_coverage=1 00:12:15.501 --rc genhtml_legend=1 00:12:15.501 --rc geninfo_all_blocks=1 00:12:15.501 --rc geninfo_unexecuted_blocks=1 00:12:15.501 00:12:15.501 ' 00:12:15.501 09:05:54 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.501 --rc genhtml_branch_coverage=1 00:12:15.501 --rc genhtml_function_coverage=1 00:12:15.501 --rc genhtml_legend=1 00:12:15.501 --rc geninfo_all_blocks=1 00:12:15.501 --rc geninfo_unexecuted_blocks=1 00:12:15.501 00:12:15.501 ' 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.501 --rc genhtml_branch_coverage=1 00:12:15.501 --rc genhtml_function_coverage=1 00:12:15.501 --rc genhtml_legend=1 00:12:15.501 --rc geninfo_all_blocks=1 00:12:15.501 --rc geninfo_unexecuted_blocks=1 00:12:15.501 00:12:15.501 ' 00:12:15.501 09:05:54 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.501 09:05:54 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.501 09:05:54 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.501 09:05:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.501 09:05:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.501 09:05:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:15.501 09:05:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.501 09:05:54 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:15.501 09:05:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:15.501 
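The lcov probe above walks scripts/common.sh's version comparator: split each version string on ., - and :, then compare the fields numerically. Roughly the traced @333-368 path, minus the decimal/validation guards:

lt() { cmp_versions "$1" '<' "$2"; }        # e.g. lt 1.15 2 -> true

cmp_versions() {
    local -a ver1 ver2
    local op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    return 1                                # equal versions: a strict < or > fails
}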
************************************ 00:12:15.501 START TEST xnvme_to_malloc_dd_copy 00:12:15.501 ************************************ 00:12:15.501 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:12:15.501 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:15.501 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:15.501 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:15.763 09:05:54 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:15.763 { 00:12:15.763 "subsystems": [ 00:12:15.763 { 00:12:15.763 "subsystem": "bdev", 00:12:15.763 "config": [ 00:12:15.763 { 00:12:15.763 "params": { 00:12:15.763 "block_size": 512, 00:12:15.763 "num_blocks": 2097152, 00:12:15.763 "name": "malloc0" 00:12:15.763 }, 00:12:15.763 "method": "bdev_malloc_create" 00:12:15.763 }, 00:12:15.763 { 00:12:15.763 "params": { 00:12:15.763 "io_mechanism": "libaio", 00:12:15.763 "filename": "/dev/nullb0", 00:12:15.763 "name": "null0" 00:12:15.763 }, 00:12:15.763 "method": "bdev_xnvme_create" 00:12:15.763 }, 
00:12:15.763 { 00:12:15.763 "method": "bdev_wait_for_examine" 00:12:15.763 } 00:12:15.763 ] 00:12:15.763 } 00:12:15.763 ] 00:12:15.763 } 00:12:15.763 [2024-11-20 09:05:54.495896] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:12:15.763 [2024-11-20 09:05:54.496022] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68739 ] 00:12:15.763 [2024-11-20 09:05:54.656620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:16.023 [2024-11-20 09:05:54.765896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.934  [2024-11-20T09:05:57.792Z] Copying: 224/1024 [MB] (224 MBps) [2024-11-20T09:05:59.179Z] Copying: 450/1024 [MB] (225 MBps) [2024-11-20T09:06:00.120Z] Copying: 675/1024 [MB] (225 MBps) [2024-11-20T09:06:00.381Z] Copying: 895/1024 [MB] (219 MBps) [2024-11-20T09:06:02.924Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:12:24.005 00:12:24.005 09:06:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:24.005 09:06:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:24.005 09:06:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:24.005 09:06:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:24.005 { 00:12:24.005 "subsystems": [ 00:12:24.005 { 00:12:24.005 "subsystem": "bdev", 00:12:24.005 "config": [ 00:12:24.005 { 00:12:24.005 "params": { 00:12:24.005 "block_size": 512, 00:12:24.005 "num_blocks": 2097152, 00:12:24.005 "name": "malloc0" 00:12:24.005 }, 00:12:24.005 "method": "bdev_malloc_create" 00:12:24.005 }, 00:12:24.005 { 00:12:24.005 "params": { 00:12:24.005 "io_mechanism": "libaio", 00:12:24.005 "filename": "/dev/nullb0", 00:12:24.005 "name": "null0" 00:12:24.005 }, 00:12:24.005 "method": "bdev_xnvme_create" 00:12:24.005 }, 00:12:24.005 { 00:12:24.005 "method": "bdev_wait_for_examine" 00:12:24.005 } 00:12:24.005 ] 00:12:24.005 } 00:12:24.005 ] 00:12:24.005 } 00:12:24.005 [2024-11-20 09:06:02.813634] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
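The JSON block printed above is the entire bdev topology spdk_dd needs: a 1 GiB malloc source (2097152 x 512-byte blocks), an xnvme bdev over the null_blk device as the sink, and an examine barrier. The test feeds it over an anonymous fd; a standalone equivalent would look roughly like this (run from the repo root, after modprobe null_blk gb=1):

build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)

The reverse pass that follows simply swaps --ib and --ob.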
00:12:24.005 [2024-11-20 09:06:02.813751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68840 ] 00:12:24.264 [2024-11-20 09:06:02.970765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.264 [2024-11-20 09:06:03.054141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.164  [2024-11-20T09:06:06.018Z] Copying: 297/1024 [MB] (297 MBps) [2024-11-20T09:06:06.963Z] Copying: 597/1024 [MB] (300 MBps) [2024-11-20T09:06:07.529Z] Copying: 885/1024 [MB] (287 MBps) [2024-11-20T09:06:09.427Z] Copying: 1024/1024 [MB] (average 294 MBps) 00:12:30.508 00:12:30.508 09:06:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:30.508 09:06:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:30.509 09:06:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:30.509 09:06:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:30.509 09:06:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:30.509 09:06:09 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:30.509 { 00:12:30.509 "subsystems": [ 00:12:30.509 { 00:12:30.509 "subsystem": "bdev", 00:12:30.509 "config": [ 00:12:30.509 { 00:12:30.509 "params": { 00:12:30.509 "block_size": 512, 00:12:30.509 "num_blocks": 2097152, 00:12:30.509 "name": "malloc0" 00:12:30.509 }, 00:12:30.509 "method": "bdev_malloc_create" 00:12:30.509 }, 00:12:30.509 { 00:12:30.509 "params": { 00:12:30.509 "io_mechanism": "io_uring", 00:12:30.509 "filename": "/dev/nullb0", 00:12:30.509 "name": "null0" 00:12:30.509 }, 00:12:30.509 "method": "bdev_xnvme_create" 00:12:30.509 }, 00:12:30.509 { 00:12:30.509 "method": "bdev_wait_for_examine" 00:12:30.509 } 00:12:30.509 ] 00:12:30.509 } 00:12:30.509 ] 00:12:30.509 } 00:12:30.509 [2024-11-20 09:06:09.374917] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
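Between the libaio and io_uring passes only one key changes: the test keeps the bdev description in an associative array and rewrites io_mechanism per iteration (xnvme.sh@17 and @38-39). Simplified shape, with the copy steps elided:

xnvme_io=(libaio io_uring)
declare -A method_bdev_xnvme_create_0=(
    [name]=null0
    [filename]=/dev/nullb0
)
for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    # regenerate the JSON config, then copy malloc0 -> null0 and back
done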
00:12:30.509 [2024-11-20 09:06:09.375040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68920 ] 00:12:30.770 [2024-11-20 09:06:09.532653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.770 [2024-11-20 09:06:09.649341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.312  [2024-11-20T09:06:12.802Z] Copying: 237/1024 [MB] (237 MBps) [2024-11-20T09:06:13.745Z] Copying: 470/1024 [MB] (233 MBps) [2024-11-20T09:06:14.690Z] Copying: 703/1024 [MB] (233 MBps) [2024-11-20T09:06:15.263Z] Copying: 937/1024 [MB] (233 MBps) [2024-11-20T09:06:17.810Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:12:38.891 00:12:38.891 09:06:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:38.891 09:06:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:38.891 09:06:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:38.891 09:06:17 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:38.891 { 00:12:38.891 "subsystems": [ 00:12:38.891 { 00:12:38.891 "subsystem": "bdev", 00:12:38.891 "config": [ 00:12:38.891 { 00:12:38.891 "params": { 00:12:38.891 "block_size": 512, 00:12:38.891 "num_blocks": 2097152, 00:12:38.891 "name": "malloc0" 00:12:38.891 }, 00:12:38.891 "method": "bdev_malloc_create" 00:12:38.891 }, 00:12:38.891 { 00:12:38.891 "params": { 00:12:38.891 "io_mechanism": "io_uring", 00:12:38.891 "filename": "/dev/nullb0", 00:12:38.891 "name": "null0" 00:12:38.891 }, 00:12:38.891 "method": "bdev_xnvme_create" 00:12:38.891 }, 00:12:38.891 { 00:12:38.891 "method": "bdev_wait_for_examine" 00:12:38.891 } 00:12:38.891 ] 00:12:38.891 } 00:12:38.891 ] 00:12:38.891 } 00:12:38.891 [2024-11-20 09:06:17.786373] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:12:38.891 [2024-11-20 09:06:17.786530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69018 ] 00:12:39.150 [2024-11-20 09:06:17.954214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.439 [2024-11-20 09:06:18.088922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.403  [2024-11-20T09:06:21.266Z] Copying: 240/1024 [MB] (240 MBps) [2024-11-20T09:06:22.209Z] Copying: 476/1024 [MB] (236 MBps) [2024-11-20T09:06:23.593Z] Copying: 712/1024 [MB] (236 MBps) [2024-11-20T09:06:23.593Z] Copying: 948/1024 [MB] (235 MBps) [2024-11-20T09:06:26.170Z] Copying: 1024/1024 [MB] (average 236 MBps) 00:12:47.251 00:12:47.251 09:06:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:47.251 09:06:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:47.511 00:12:47.511 real 0m31.818s 00:12:47.511 user 0m28.204s 00:12:47.511 sys 0m3.005s 00:12:47.511 09:06:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.511 ************************************ 00:12:47.511 END TEST xnvme_to_malloc_dd_copy 00:12:47.511 ************************************ 00:12:47.511 09:06:26 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:47.511 09:06:26 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:47.511 09:06:26 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:47.511 09:06:26 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.511 09:06:26 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:47.511 ************************************ 00:12:47.511 START TEST xnvme_bdevperf 00:12:47.511 ************************************ 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:47.511 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:47.512 09:06:26 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:47.512 { 00:12:47.512 "subsystems": [ 00:12:47.512 { 00:12:47.512 "subsystem": "bdev", 00:12:47.512 "config": [ 00:12:47.512 { 00:12:47.512 "params": { 00:12:47.512 "io_mechanism": "libaio", 00:12:47.512 "filename": "/dev/nullb0", 00:12:47.512 "name": "null0" 00:12:47.512 }, 00:12:47.512 "method": "bdev_xnvme_create" 00:12:47.512 }, 00:12:47.512 { 00:12:47.512 "method": "bdev_wait_for_examine" 00:12:47.512 } 00:12:47.512 ] 00:12:47.512 } 00:12:47.512 ] 00:12:47.512 } 00:12:47.512 [2024-11-20 09:06:26.400434] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:12:47.512 [2024-11-20 09:06:26.400579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69140 ] 00:12:47.774 [2024-11-20 09:06:26.564605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.035 [2024-11-20 09:06:26.701468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.297 Running I/O for 5 seconds... 
00:12:50.181 154688.00 IOPS, 604.25 MiB/s [2024-11-20T09:06:30.039Z] 154688.00 IOPS, 604.25 MiB/s [2024-11-20T09:06:31.426Z] 154624.00 IOPS, 604.00 MiB/s [2024-11-20T09:06:32.372Z] 154640.00 IOPS, 604.06 MiB/s 00:12:53.453 Latency(us) 00:12:53.453 [2024-11-20T09:06:32.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.453 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:53.453 null0 : 5.00 154620.04 603.98 0.00 0.00 410.91 141.78 2054.30 00:12:53.453 [2024-11-20T09:06:32.372Z] =================================================================================================================== 00:12:53.453 [2024-11-20T09:06:32.372Z] Total : 154620.04 603.98 0.00 0.00 410.91 141.78 2054.30 00:12:54.026 09:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:54.026 09:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:54.026 09:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:54.026 09:06:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:54.026 09:06:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:54.026 09:06:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:54.026 { 00:12:54.026 "subsystems": [ 00:12:54.026 { 00:12:54.026 "subsystem": "bdev", 00:12:54.026 "config": [ 00:12:54.026 { 00:12:54.026 "params": { 00:12:54.026 "io_mechanism": "io_uring", 00:12:54.026 "filename": "/dev/nullb0", 00:12:54.026 "name": "null0" 00:12:54.026 }, 00:12:54.026 "method": "bdev_xnvme_create" 00:12:54.026 }, 00:12:54.026 { 00:12:54.026 "method": "bdev_wait_for_examine" 00:12:54.026 } 00:12:54.026 ] 00:12:54.026 } 00:12:54.026 ] 00:12:54.026 } 00:12:54.026 [2024-11-20 09:06:32.799241] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:12:54.026 [2024-11-20 09:06:32.799375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69215 ] 00:12:54.288 [2024-11-20 09:06:32.956329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:54.288 [2024-11-20 09:06:33.059049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.548 Running I/O for 5 seconds... 
00:12:56.432 177280.00 IOPS, 692.50 MiB/s [2024-11-20T09:06:36.737Z] 177280.00 IOPS, 692.50 MiB/s [2024-11-20T09:06:37.311Z] 177301.33 IOPS, 692.58 MiB/s [2024-11-20T09:06:38.698Z] 177312.00 IOPS, 692.62 MiB/s 00:12:59.779 Latency(us) 00:12:59.779 [2024-11-20T09:06:38.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.779 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:59.779 null0 : 5.00 177270.00 692.46 0.00 0.00 358.08 193.77 2003.89 00:12:59.780 [2024-11-20T09:06:38.699Z] =================================================================================================================== 00:12:59.780 [2024-11-20T09:06:38.699Z] Total : 177270.00 692.46 0.00 0.00 358.08 193.77 2003.89 00:13:00.352 09:06:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:00.352 09:06:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:00.352 00:13:00.352 real 0m12.735s 00:13:00.352 user 0m10.238s 00:13:00.352 sys 0m2.244s 00:13:00.352 09:06:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.352 ************************************ 00:13:00.352 END TEST xnvme_bdevperf 00:13:00.352 ************************************ 00:13:00.352 09:06:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:00.352 ************************************ 00:13:00.352 END TEST nvme_xnvme 00:13:00.352 ************************************ 00:13:00.352 00:13:00.352 real 0m44.862s 00:13:00.352 user 0m38.561s 00:13:00.352 sys 0m5.381s 00:13:00.352 09:06:39 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.352 09:06:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.352 09:06:39 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:00.352 09:06:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.352 09:06:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.352 09:06:39 -- common/autotest_common.sh@10 -- # set +x 00:13:00.352 ************************************ 00:13:00.352 START TEST blockdev_xnvme 00:13:00.352 ************************************ 00:13:00.352 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:00.352 * Looking for test storage... 
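Both five-second runs above are the same bdevperf invocation, differing only in io_mechanism; on this host io_uring reads ~177k IOPS against libaio's ~155k at 4 KiB randread, queue depth 64. Reconstructed standalone form (repo root, null_blk providing /dev/nullb0):

modprobe null_blk gb=1
build/examples/bdevperf -q 64 -w randread -t 5 -T null0 -o 4096 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "io_mechanism": "io_uring", "filename": "/dev/nullb0", "name": "null0" },
          "method": "bdev_xnvme_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)
modprobe -r null_blk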
00:13:00.352 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:00.352 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:00.352 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:00.352 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:00.613 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.613 09:06:39 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:00.613 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.613 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.613 --rc genhtml_branch_coverage=1 00:13:00.613 --rc genhtml_function_coverage=1 00:13:00.613 --rc genhtml_legend=1 00:13:00.613 --rc geninfo_all_blocks=1 00:13:00.613 --rc geninfo_unexecuted_blocks=1 00:13:00.613 00:13:00.613 ' 00:13:00.613 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.613 --rc genhtml_branch_coverage=1 00:13:00.613 --rc genhtml_function_coverage=1 00:13:00.613 --rc genhtml_legend=1 
00:13:00.613 --rc geninfo_all_blocks=1 00:13:00.613 --rc geninfo_unexecuted_blocks=1 00:13:00.613 00:13:00.613 ' 00:13:00.613 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.613 --rc genhtml_branch_coverage=1 00:13:00.613 --rc genhtml_function_coverage=1 00:13:00.613 --rc genhtml_legend=1 00:13:00.613 --rc geninfo_all_blocks=1 00:13:00.613 --rc geninfo_unexecuted_blocks=1 00:13:00.613 00:13:00.613 ' 00:13:00.613 09:06:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:00.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.613 --rc genhtml_branch_coverage=1 00:13:00.613 --rc genhtml_function_coverage=1 00:13:00.613 --rc genhtml_legend=1 00:13:00.613 --rc geninfo_all_blocks=1 00:13:00.613 --rc geninfo_unexecuted_blocks=1 00:13:00.613 00:13:00.613 ' 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:00.613 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69360 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:00.614 09:06:39 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69360 00:13:00.614 09:06:39 blockdev_xnvme -- common/autotest_common.sh@835 -- # 
'[' -z 69360 ']' 00:13:00.614 09:06:39 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.614 09:06:39 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:00.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.614 09:06:39 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.614 09:06:39 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:00.614 09:06:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:00.614 [2024-11-20 09:06:39.391598] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:13:00.614 [2024-11-20 09:06:39.391854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69360 ] 00:13:00.874 [2024-11-20 09:06:39.550435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.874 [2024-11-20 09:06:39.651883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.444 09:06:40 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.444 09:06:40 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:13:01.444 09:06:40 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:01.444 09:06:40 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:01.444 09:06:40 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:01.444 09:06:40 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:01.444 09:06:40 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:01.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:01.964 Waiting for block devices as requested 00:13:01.964 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:01.964 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.223 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.223 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:07.509 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:13:07.509 
09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:07.509 nvme0n1 00:13:07.509 nvme1n1 00:13:07.509 nvme2n1 00:13:07.509 nvme2n2 00:13:07.509 nvme2n3 00:13:07.509 nvme3n1 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:13:07.509 09:06:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.509 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "884ab7ac-ba56-473e-b079-5bba63d3fc2e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "884ab7ac-ba56-473e-b079-5bba63d3fc2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cebdcb6d-15f2-46c4-8381-633864df9c79"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cebdcb6d-15f2-46c4-8381-633864df9c79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "414535d7-4c3a-4cf5-8e37-cc94408b84bf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "414535d7-4c3a-4cf5-8e37-cc94408b84bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "b2fbafaf-8565-4376-a777-af17d1148968"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2fbafaf-8565-4376-a777-af17d1148968",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a05080f3-d274-4511-9f12-2830182f75ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a05080f3-d274-4511-9f12-2830182f75ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ed16c376-7b4d-4892-9b0c-c96976f12849"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ed16c376-7b4d-4892-9b0c-c96976f12849",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:07.510 09:06:46 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69360 00:13:07.510 09:06:46 blockdev_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 69360 ']' 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 69360 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69360 00:13:07.510 killing process with pid 69360 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69360' 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 69360 00:13:07.510 09:06:46 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 69360 00:13:08.931 09:06:47 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:08.931 09:06:47 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:08.931 09:06:47 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.931 09:06:47 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.931 09:06:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:08.931 ************************************ 00:13:08.931 START TEST bdev_hello_world 00:13:08.931 ************************************ 00:13:08.931 09:06:47 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:08.931 [2024-11-20 09:06:47.682058] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:13:08.931 [2024-11-20 09:06:47.682715] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69720 ] 00:13:08.931 [2024-11-20 09:06:47.838738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.192 [2024-11-20 09:06:47.941777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.454 [2024-11-20 09:06:48.270445] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:09.454 [2024-11-20 09:06:48.270500] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:09.454 [2024-11-20 09:06:48.270517] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:09.454 [2024-11-20 09:06:48.272423] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:09.454 [2024-11-20 09:06:48.273311] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:09.454 [2024-11-20 09:06:48.273342] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:09.454 [2024-11-20 09:06:48.273697] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
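(The hello_bdev example above exercised the full round trip: it opened bdev nvme0n1 from the generated JSON config, wrote "Hello World!" through an io channel, and read the same string back. A minimal sketch of replaying that step by hand, assuming the same checkout and config paths that appear in this log:

    # Sketch: rerun the hello-world write/read round trip against the first xNVMe bdev.
    cd /home/vagrant/spdk_repo/spdk
    # --json points at the bdev config generated earlier in this run;
    # -b picks the bdev that hello_bdev opens, writes to, and reads back.
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
)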
00:13:09.454 00:13:09.454 [2024-11-20 09:06:48.273712] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:10.025 00:13:10.025 real 0m1.240s 00:13:10.025 user 0m0.917s 00:13:10.025 sys 0m0.186s 00:13:10.025 ************************************ 00:13:10.025 END TEST bdev_hello_world 00:13:10.025 ************************************ 00:13:10.025 09:06:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.025 09:06:48 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:10.025 09:06:48 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:10.025 09:06:48 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:10.025 09:06:48 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.025 09:06:48 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.025 ************************************ 00:13:10.025 START TEST bdev_bounds 00:13:10.025 ************************************ 00:13:10.025 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:13:10.025 Process bdevio pid: 69757 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69757 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69757' 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69757 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 69757 ']' 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:10.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:10.026 09:06:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:10.286 [2024-11-20 09:06:48.986573] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:13:10.286 [2024-11-20 09:06:48.986695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69757 ]
00:13:10.286 [2024-11-20 09:06:49.134178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:10.545 [2024-11-20 09:06:49.233972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:10.545 [2024-11-20 09:06:49.234277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:10.545 [2024-11-20 09:06:49.234338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.114 09:06:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:11.114 09:06:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:13:11.114 09:06:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:13:11.114 I/O targets:
00:13:11.114 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:13:11.114 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:13:11.114 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:13:11.114 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:13:11.114 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:13:11.114 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:13:11.114
00:13:11.114
00:13:11.114 CUnit - A unit testing framework for C - Version 2.1-3
00:13:11.114 http://cunit.sourceforge.net/
00:13:11.114
00:13:11.114
00:13:11.114 Suite: bdevio tests on: nvme3n1
00:13:11.115 Test: blockdev write read block ...passed
00:13:11.115 Test: blockdev write zeroes read block ...passed
00:13:11.115 Test: blockdev write zeroes read no split ...passed
00:13:11.115 Test: blockdev write zeroes read split ...passed
00:13:11.115 Test: blockdev write zeroes read split partial ...passed
00:13:11.115 Test: blockdev reset ...passed
00:13:11.115 Test: blockdev write read 8 blocks ...passed
00:13:11.115 Test: blockdev write read size > 128k ...passed
00:13:11.115 Test: blockdev write read invalid size ...passed
00:13:11.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:11.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:11.115 Test: blockdev write read max offset ...passed
00:13:11.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:11.115 Test: blockdev writev readv 8 blocks ...passed
00:13:11.115 Test: blockdev writev readv 30 x 1block ...passed
00:13:11.115 Test: blockdev writev readv block ...passed
00:13:11.115 Test: blockdev writev readv size > 128k ...passed
00:13:11.115 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:11.115 Test: blockdev comparev and writev ...passed
00:13:11.115 Test: blockdev nvme passthru rw ...passed
00:13:11.115 Test: blockdev nvme passthru vendor specific ...passed
00:13:11.115 Test: blockdev nvme admin passthru ...passed
00:13:11.115 Test: blockdev copy ...passed
00:13:11.115 Suite: bdevio tests on: nvme2n3
00:13:11.115 Test: blockdev write read block ...passed
00:13:11.115 Test: blockdev write zeroes read block ...passed
00:13:11.115 Test: blockdev write zeroes read no split ...passed
00:13:11.115 Test: blockdev write zeroes read split ...passed
00:13:11.115 Test: blockdev write zeroes read split partial ...passed
00:13:11.115 Test: blockdev reset ...passed
00:13:11.115 Test: blockdev write read 8 blocks ...passed
00:13:11.115 Test: blockdev write read size > 128k ...passed
00:13:11.115 Test: blockdev write read invalid size ...passed
00:13:11.115 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:11.115 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:11.115 Test: blockdev write read max offset ...passed
00:13:11.115 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:11.115 Test: blockdev writev readv 8 blocks ...passed
00:13:11.115 Test: blockdev writev readv 30 x 1block ...passed
00:13:11.115 Test: blockdev writev readv block ...passed
00:13:11.115 Test: blockdev writev readv size > 128k ...passed
00:13:11.115 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:11.115 Test: blockdev comparev and writev ...passed
00:13:11.115 Test: blockdev nvme passthru rw ...passed
00:13:11.115 Test: blockdev nvme passthru vendor specific ...passed
00:13:11.115 Test: blockdev nvme admin passthru ...passed
00:13:11.115 Test: blockdev copy ...passed
00:13:11.115 Suite: bdevio tests on: nvme2n2
00:13:11.115 Test: blockdev write read block ...passed
00:13:11.115 Test: blockdev write zeroes read block ...passed
00:13:11.115 Test: blockdev write zeroes read no split ...passed
00:13:11.377 Test: blockdev write zeroes read split ...passed
00:13:11.377 Test: blockdev write zeroes read split partial ...passed
00:13:11.377 Test: blockdev reset ...passed
00:13:11.377 Test: blockdev write read 8 blocks ...passed
00:13:11.377 Test: blockdev write read size > 128k ...passed
00:13:11.377 Test: blockdev write read invalid size ...passed
00:13:11.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:11.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:11.377 Test: blockdev write read max offset ...passed
00:13:11.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:11.377 Test: blockdev writev readv 8 blocks ...passed
00:13:11.377 Test: blockdev writev readv 30 x 1block ...passed
00:13:11.377 Test: blockdev writev readv block ...passed
00:13:11.377 Test: blockdev writev readv size > 128k ...passed
00:13:11.377 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:11.377 Test: blockdev comparev and writev ...passed
00:13:11.377 Test: blockdev nvme passthru rw ...passed
00:13:11.377 Test: blockdev nvme passthru vendor specific ...passed
00:13:11.377 Test: blockdev nvme admin passthru ...passed
00:13:11.377 Test: blockdev copy ...passed
00:13:11.377 Suite: bdevio tests on: nvme2n1
00:13:11.377 Test: blockdev write read block ...passed
00:13:11.377 Test: blockdev write zeroes read block ...passed
00:13:11.377 Test: blockdev write zeroes read no split ...passed
00:13:11.377 Test: blockdev write zeroes read split ...passed
00:13:11.377 Test: blockdev write zeroes read split partial ...passed
00:13:11.377 Test: blockdev reset ...passed
00:13:11.377 Test: blockdev write read 8 blocks ...passed
00:13:11.377 Test: blockdev write read size > 128k ...passed
00:13:11.377 Test: blockdev write read invalid size ...passed
00:13:11.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:11.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:11.377 Test: blockdev write read max offset ...passed
00:13:11.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:11.377 Test: blockdev writev readv 8 blocks ...passed
00:13:11.377 Test: blockdev writev readv 30 x 1block ...passed
00:13:11.377 Test: blockdev writev readv block ...passed
00:13:11.377 Test: blockdev writev readv size > 128k ...passed
00:13:11.377 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:11.377 Test: blockdev comparev and writev ...passed
00:13:11.377 Test: blockdev nvme passthru rw ...passed
00:13:11.377 Test: blockdev nvme passthru vendor specific ...passed
00:13:11.377 Test: blockdev nvme admin passthru ...passed
00:13:11.377 Test: blockdev copy ...passed
00:13:11.377 Suite: bdevio tests on: nvme1n1
00:13:11.377 Test: blockdev write read block ...passed
00:13:11.377 Test: blockdev write zeroes read block ...passed
00:13:11.377 Test: blockdev write zeroes read no split ...passed
00:13:11.377 Test: blockdev write zeroes read split ...passed
00:13:11.377 Test: blockdev write zeroes read split partial ...passed
00:13:11.377 Test: blockdev reset ...passed
00:13:11.377 Test: blockdev write read 8 blocks ...passed
00:13:11.377 Test: blockdev write read size > 128k ...passed
00:13:11.377 Test: blockdev write read invalid size ...passed
00:13:11.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:11.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:11.377 Test: blockdev write read max offset ...passed
00:13:11.377 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:11.377 Test: blockdev writev readv 8 blocks ...passed
00:13:11.377 Test: blockdev writev readv 30 x 1block ...passed
00:13:11.377 Test: blockdev writev readv block ...passed
00:13:11.377 Test: blockdev writev readv size > 128k ...passed
00:13:11.377 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:11.377 Test: blockdev comparev and writev ...passed
00:13:11.377 Test: blockdev nvme passthru rw ...passed
00:13:11.377 Test: blockdev nvme passthru vendor specific ...passed
00:13:11.377 Test: blockdev nvme admin passthru ...passed
00:13:11.377 Test: blockdev copy ...passed
00:13:11.377 Suite: bdevio tests on: nvme0n1
00:13:11.377 Test: blockdev write read block ...passed
00:13:11.377 Test: blockdev write zeroes read block ...passed
00:13:11.377 Test: blockdev write zeroes read no split ...passed
00:13:11.377 Test: blockdev write zeroes read split ...passed
00:13:11.377 Test: blockdev write zeroes read split partial ...passed
00:13:11.377 Test: blockdev reset ...passed
00:13:11.377 Test: blockdev write read 8 blocks ...passed
00:13:11.377 Test: blockdev write read size > 128k ...passed
00:13:11.377 Test: blockdev write read invalid size ...passed
00:13:11.377 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:11.377 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:11.377 Test: blockdev write read max offset ...passed
00:13:11.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:11.639 Test: blockdev writev readv 8 blocks ...passed
00:13:11.639 Test: blockdev writev readv 30 x 1block ...passed
00:13:11.639 Test: blockdev writev readv block ...passed
00:13:11.639 Test: blockdev writev readv size > 128k ...passed
00:13:11.639 Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:11.639 Test: blockdev comparev and writev ...passed
00:13:11.639 Test: blockdev nvme passthru rw ...passed
00:13:11.639 Test: blockdev nvme passthru vendor specific ...passed
00:13:11.639 Test: blockdev nvme admin passthru ...passed
00:13:11.639 Test: blockdev copy ...passed
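(All six bdevio suites above run the same 23 cases, which is where the totals in the CUnit summary that follows come from: 6 suites x 23 tests = 138 tests, all passed with none failed or inactive.)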
00:13:11.639
00:13:11.639 Run Summary:  Type  Total  Ran  Passed  Failed  Inactive
00:13:11.639             suites      6    6     n/a       0         0
00:13:11.639              tests    138  138     138       0         0
00:13:11.639            asserts    780  780     780       0       n/a
00:13:11.639
00:13:11.639 Elapsed time = 1.104 seconds
00:13:11.639 0
00:13:11.639 09:06:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69757
00:13:11.639 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 69757 ']'
00:13:11.639 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 69757
00:13:11.639 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:13:11.639 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:11.639 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69757
00:13:11.639 killing process with pid 69757
09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69757'
09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 69757
09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 69757
00:13:12.210 09:06:50 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:12.210
00:13:12.210 real 0m2.018s
00:13:12.210 user 0m4.939s
00:13:12.210 sys 0m0.305s
00:13:12.210 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:12.210 09:06:50 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:12.210 ************************************
00:13:12.210 END TEST bdev_bounds
00:13:12.210 ************************************
00:13:12.210 09:06:50 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:12.210 09:06:50 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:12.210 09:06:50 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:12.210 09:06:50 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:12.210 ************************************
00:13:12.210 START TEST bdev_nbd
************************************
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
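(The bdev_nbd test starting here drives SPDK's NBD front end: a bdev_svc app listens on the dedicated /var/tmp/spdk-nbd.sock RPC socket, each of the six bdevs is exported as a kernel /dev/nbdX device, a single 4096-byte direct read via dd confirms each export answers I/O, and the devices are detached again. A condensed sketch of that start/verify/stop cycle for one bdev, using the same RPCs that appear verbatim below and assuming the nbd kernel module is loaded, as the [[ -e /sys/module/nbd ]] check just below verifies:

    # Sketch: export one bdev over NBD, read one block through the kernel device, detach.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct   # export answers direct reads
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks      # list nbd <-> bdev mappings
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
)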
00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69811 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69811 /var/tmp/spdk-nbd.sock 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 69811 ']' 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:12.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.210 09:06:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:12.210 [2024-11-20 09:06:51.061558] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:13:12.210 [2024-11-20 09:06:51.061843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.471 [2024-11-20 09:06:51.217829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.471 [2024-11-20 09:06:51.321425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.066 09:06:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:13.066 09:06:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.067 09:06:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.328 
1+0 records in 00:13:13.328 1+0 records out 00:13:13.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995693 s, 4.1 MB/s 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.328 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.589 1+0 records in 00:13:13.589 1+0 records out 00:13:13.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116535 s, 3.5 MB/s 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.589 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:13.850 09:06:52 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:13.850 1+0 records in 00:13:13.850 1+0 records out 00:13:13.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000931291 s, 4.4 MB/s 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:13.850 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:13.851 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:14.112 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.113 1+0 records in 00:13:14.113 1+0 records out 00:13:14.113 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000800249 s, 5.1 MB/s 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.113 09:06:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.373 1+0 records in 00:13:14.373 1+0 records out 00:13:14.373 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111666 s, 3.7 MB/s 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:14.373 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.374 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:14.374 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:14.374 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.374 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.374 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:13:14.634 09:06:53 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:14.634 1+0 records in 00:13:14.634 1+0 records out 00:13:14.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665066 s, 6.2 MB/s 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd0", 00:13:14.634 "bdev_name": "nvme0n1" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd1", 00:13:14.634 "bdev_name": "nvme1n1" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd2", 00:13:14.634 "bdev_name": "nvme2n1" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd3", 00:13:14.634 "bdev_name": "nvme2n2" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd4", 00:13:14.634 "bdev_name": "nvme2n3" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd5", 00:13:14.634 "bdev_name": "nvme3n1" 00:13:14.634 } 00:13:14.634 ]' 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:14.634 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd0", 00:13:14.634 "bdev_name": "nvme0n1" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd1", 00:13:14.634 "bdev_name": "nvme1n1" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd2", 00:13:14.634 "bdev_name": "nvme2n1" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd3", 00:13:14.634 "bdev_name": "nvme2n2" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd4", 00:13:14.634 "bdev_name": "nvme2n3" 00:13:14.634 }, 00:13:14.634 { 00:13:14.634 "nbd_device": "/dev/nbd5", 00:13:14.634 "bdev_name": "nvme3n1" 00:13:14.634 } 00:13:14.634 ]' 00:13:14.634 09:06:53 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:14.894 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:15.154 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:15.154 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:15.154 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:15.154 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.154 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.154 09:06:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:15.154 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.154 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.154 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.154 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.413 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:15.672 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:15.672 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:15.672 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.673 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.931 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.190 09:06:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:16.190 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:16.449 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:16.450 /dev/nbd0 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.450 1+0 records in 00:13:16.450 1+0 records out 00:13:16.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550499 s, 7.4 MB/s 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:16.450 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:16.709 /dev/nbd1 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.709 1+0 records in 00:13:16.709 1+0 records out 00:13:16.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570592 s, 7.2 MB/s 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:16.709 09:06:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:16.709 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:16.967 /dev/nbd10 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.967 1+0 records in 00:13:16.967 1+0 records out 00:13:16.967 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533947 s, 7.7 MB/s 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:16.967 09:06:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:17.226 /dev/nbd11 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.226 09:06:56 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.226 1+0 records in 00:13:17.226 1+0 records out 00:13:17.226 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480419 s, 8.5 MB/s 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.226 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:17.484 /dev/nbd12 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.484 1+0 records in 00:13:17.484 1+0 records out 00:13:17.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575982 s, 7.1 MB/s 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.484 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:17.742 /dev/nbd13 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.742 1+0 records in 00:13:17.742 1+0 records out 00:13:17.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494076 s, 8.3 MB/s 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:17.742 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd0", 00:13:18.002 "bdev_name": "nvme0n1" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd1", 00:13:18.002 "bdev_name": "nvme1n1" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd10", 00:13:18.002 "bdev_name": "nvme2n1" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd11", 00:13:18.002 "bdev_name": "nvme2n2" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd12", 00:13:18.002 "bdev_name": "nvme2n3" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd13", 00:13:18.002 "bdev_name": "nvme3n1" 00:13:18.002 } 00:13:18.002 ]' 00:13:18.002 09:06:56 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd0", 00:13:18.002 "bdev_name": "nvme0n1" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd1", 00:13:18.002 "bdev_name": "nvme1n1" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd10", 00:13:18.002 "bdev_name": "nvme2n1" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd11", 00:13:18.002 "bdev_name": "nvme2n2" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd12", 00:13:18.002 "bdev_name": "nvme2n3" 00:13:18.002 }, 00:13:18.002 { 00:13:18.002 "nbd_device": "/dev/nbd13", 00:13:18.002 "bdev_name": "nvme3n1" 00:13:18.002 } 00:13:18.002 ]' 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:18.002 /dev/nbd1 00:13:18.002 /dev/nbd10 00:13:18.002 /dev/nbd11 00:13:18.002 /dev/nbd12 00:13:18.002 /dev/nbd13' 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:18.002 /dev/nbd1 00:13:18.002 /dev/nbd10 00:13:18.002 /dev/nbd11 00:13:18.002 /dev/nbd12 00:13:18.002 /dev/nbd13' 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:18.002 256+0 records in 00:13:18.002 256+0 records out 00:13:18.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068995 s, 152 MB/s 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:18.002 256+0 records in 00:13:18.002 256+0 records out 00:13:18.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0602885 s, 17.4 MB/s 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:18.002 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:18.261 256+0 records in 00:13:18.261 256+0 records out 00:13:18.261 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.116593 s, 9.0 MB/s 00:13:18.261 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:18.261 09:06:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:18.261 256+0 records in 00:13:18.261 256+0 records out 00:13:18.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0795753 s, 13.2 MB/s 00:13:18.261 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:18.261 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:18.261 256+0 records in 00:13:18.261 256+0 records out 00:13:18.261 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0711739 s, 14.7 MB/s 00:13:18.261 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:18.261 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:18.518 256+0 records in 00:13:18.518 256+0 records out 00:13:18.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0660572 s, 15.9 MB/s 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:18.518 256+0 records in 00:13:18.518 256+0 records out 00:13:18.518 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0712249 s, 14.7 MB/s 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.518 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.775 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.032 09:06:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.289 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.547 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.805 09:06:58 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:19.805 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:20.063 09:06:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:20.321 malloc_lvol_verify 00:13:20.321 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:20.321 0851756f-7fd5-4cc6-b327-0fc983e73af1 00:13:20.321 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:20.579 5418878e-1319-4aba-8566-399e240aeebe 00:13:20.579 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:20.839 /dev/nbd0 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
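The trace above is the nbd_with_lvol_verify flow from nbd_common.sh: over the spdk-nbd RPC socket it creates a malloc bdev, builds a logical volume store and a volume on top of it, exports the volume as /dev/nbd0, waits until /sys/block/nbd0/size reports a non-zero capacity, and finally formats the device as an end-to-end write check; the mke2fs output follows below. A minimal sketch of the same sequence — the RPC names and argument values come from the trace, while the wait loop and error handling are simplified assumptions:

    # Sketch only: RPC arguments taken from the trace above; the capacity
    # wait and error handling are assumed simplifications.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore named "lvs" on top of it
    rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside the store
    rpc nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as an nbd device
    until [[ -e /sys/block/nbd0/size && $(< /sys/block/nbd0/size) -gt 0 ]]; do
        sleep 0.1                                         # wait for the kernel to see capacity
    done
    mkfs.ext4 /dev/nbd0                                   # prove the device accepts writes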
00:13:20.839 mke2fs 1.47.0 (5-Feb-2023) 00:13:20.839 Discarding device blocks: 0/4096 done 00:13:20.839 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:20.839 00:13:20.839 Allocating group tables: 0/1 done 00:13:20.839 Writing inode tables: 0/1 done 00:13:20.839 Creating journal (1024 blocks): done 00:13:20.839 Writing superblocks and filesystem accounting information: 0/1 done 00:13:20.839 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:20.839 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69811 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 69811 ']' 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 69811 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69811 00:13:21.098 killing process with pid 69811 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69811' 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 69811 00:13:21.098 09:06:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 69811 00:13:21.664 09:07:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:21.664 00:13:21.664 real 0m9.516s 00:13:21.664 user 0m13.469s 00:13:21.664 sys 0m3.238s 00:13:21.664 09:07:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:21.664 09:07:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:21.664 ************************************ 
00:13:21.664 END TEST bdev_nbd 00:13:21.664 ************************************ 00:13:21.664 09:07:00 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:21.664 09:07:00 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:21.664 09:07:00 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:21.664 09:07:00 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:21.664 09:07:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:21.664 09:07:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.664 09:07:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.664 ************************************ 00:13:21.664 START TEST bdev_fio 00:13:21.664 ************************************ 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:21.664 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:13:21.664 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:13:21.928 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:21.929 ************************************ 00:13:21.929 START TEST bdev_fio_rw_verify 00:13:21.929 ************************************ 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:21.929 09:07:00 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:21.929 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:21.929 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:21.929 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:21.929 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:21.929 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:21.929 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:21.929 fio-3.35 00:13:21.929 Starting 6 threads 00:13:34.125 00:13:34.126 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70208: Wed Nov 20 09:07:11 2024 00:13:34.126 read: IOPS=42.0k, BW=164MiB/s (172MB/s)(1641MiB/10002msec) 00:13:34.126 slat (usec): min=2, max=954, avg= 4.50, stdev= 3.35 00:13:34.126 clat (usec): min=49, max=8934, avg=369.77, 
stdev=201.05 00:13:34.126 lat (usec): min=53, max=8939, avg=374.26, stdev=201.48 00:13:34.126 clat percentiles (usec): 00:13:34.126 | 50.000th=[ 343], 99.000th=[ 938], 99.900th=[ 1532], 99.990th=[ 3785], 00:13:34.126 | 99.999th=[ 8979] 00:13:34.126 write: IOPS=42.4k, BW=166MiB/s (174MB/s)(1657MiB/10002msec); 0 zone resets 00:13:34.126 slat (usec): min=10, max=2305, avg=21.42, stdev=30.91 00:13:34.126 clat (usec): min=45, max=6315, avg=564.12, stdev=415.00 00:13:34.126 lat (usec): min=70, max=6329, avg=585.54, stdev=416.66 00:13:34.126 clat percentiles (usec): 00:13:34.126 | 50.000th=[ 486], 99.000th=[ 2737], 99.900th=[ 3818], 99.990th=[ 4621], 00:13:34.126 | 99.999th=[ 5604] 00:13:34.126 bw ( KiB/s): min=139403, max=195689, per=100.00%, avg=170271.32, stdev=4219.82, samples=114 00:13:34.126 iops : min=34850, max=48922, avg=42566.84, stdev=1054.98, samples=114 00:13:34.126 lat (usec) : 50=0.01%, 100=0.13%, 250=18.50%, 500=47.57%, 750=24.73% 00:13:34.126 lat (usec) : 1000=6.04% 00:13:34.126 lat (msec) : 2=1.87%, 4=1.13%, 10=0.03% 00:13:34.126 cpu : usr=53.81%, sys=27.77%, ctx=10916, majf=0, minf=33681 00:13:34.126 IO depths : 1=11.6%, 2=23.6%, 4=50.7%, 8=14.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:34.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.126 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:34.126 issued rwts: total=420122,424225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:34.126 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:34.126 00:13:34.126 Run status group 0 (all jobs): 00:13:34.126 READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=1641MiB (1721MB), run=10002-10002msec 00:13:34.126 WRITE: bw=166MiB/s (174MB/s), 166MiB/s-166MiB/s (174MB/s-174MB/s), io=1657MiB (1738MB), run=10002-10002msec 00:13:34.126 ----------------------------------------------------- 00:13:34.126 Suppressions used: 00:13:34.126 count bytes template 00:13:34.126 6 48 /usr/src/fio/parse.c 00:13:34.126 3797 364512 /usr/src/fio/iolog.c 00:13:34.126 1 8 libtcmalloc_minimal.so 00:13:34.126 1 904 libcrypto.so 00:13:34.126 ----------------------------------------------------- 00:13:34.126 00:13:34.126 00:13:34.126 real 0m11.854s 00:13:34.126 user 0m33.679s 00:13:34.126 sys 0m16.980s 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 ************************************ 00:13:34.126 END TEST bdev_fio_rw_verify 00:13:34.126 ************************************ 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "884ab7ac-ba56-473e-b079-5bba63d3fc2e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "884ab7ac-ba56-473e-b079-5bba63d3fc2e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "cebdcb6d-15f2-46c4-8381-633864df9c79"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "cebdcb6d-15f2-46c4-8381-633864df9c79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "414535d7-4c3a-4cf5-8e37-cc94408b84bf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "414535d7-4c3a-4cf5-8e37-cc94408b84bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "b2fbafaf-8565-4376-a777-af17d1148968"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b2fbafaf-8565-4376-a777-af17d1148968",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "a05080f3-d274-4511-9f12-2830182f75ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a05080f3-d274-4511-9f12-2830182f75ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "ed16c376-7b4d-4892-9b0c-c96976f12849"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ed16c376-7b4d-4892-9b0c-c96976f12849",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:34.126 /home/vagrant/spdk_repo/spdk 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:34.126 09:07:12 
blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:34.126 00:13:34.126 real 0m11.995s 00:13:34.126 user 0m33.754s 00:13:34.126 sys 0m17.045s 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.126 09:07:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 ************************************ 00:13:34.126 END TEST bdev_fio 00:13:34.126 ************************************ 00:13:34.126 09:07:12 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:34.126 09:07:12 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:34.126 09:07:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:13:34.126 09:07:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.126 09:07:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:34.126 ************************************ 00:13:34.126 START TEST bdev_verify 00:13:34.126 ************************************ 00:13:34.127 09:07:12 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:34.127 [2024-11-20 09:07:12.644107] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:13:34.127 [2024-11-20 09:07:12.644219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70377 ] 00:13:34.127 [2024-11-20 09:07:12.803024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:34.127 [2024-11-20 09:07:12.898613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.127 [2024-11-20 09:07:12.898716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.385 Running I/O for 5 seconds... 
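bdevperf is launched here with -q 128 (queue depth), -o 4096 (4 KiB I/Os), -w verify (write, then read back and compare), -t 5 (seconds), and -m 0x3 (two cores — which is why each namespace appears twice in the Latency table below, once per core mask 0x1 and 0x2). The per-second progress lines that follow report aggregate IOPS plus the matching MiB/s, which is simply IOPS times the 4 KiB I/O size. A quick check of the first sample below:

    # 24256 IOPS at 4096 B per I/O: 24256 * 4096 / 1048576 = 94.75 MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 24256 * 4096 / 1048576 }'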
00:13:36.743 24256.00 IOPS, 94.75 MiB/s
[2024-11-20T09:07:16.593Z] 24432.00 IOPS, 95.44 MiB/s
[2024-11-20T09:07:17.525Z] 23904.00 IOPS, 93.38 MiB/s
[2024-11-20T09:07:18.458Z] 23424.00 IOPS, 91.50 MiB/s
[2024-11-20T09:07:18.458Z] 23347.20 IOPS, 91.20 MiB/s
00:13:39.539 Latency(us)
00:13:39.539 [2024-11-20T09:07:18.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:39.539 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x0 length 0xa0000
00:13:39.539 nvme0n1 : 5.03 1756.02 6.86 0.00 0.00 72732.37 15224.52 68560.74
00:13:39.539 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0xa0000 length 0xa0000
00:13:39.539 nvme0n1 : 5.02 1630.54 6.37 0.00 0.00 78327.52 13308.85 79046.50
00:13:39.539 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x0 length 0xbd0bd
00:13:39.539 nvme1n1 : 5.06 3240.34 12.66 0.00 0.00 39206.10 4133.81 64124.46
00:13:39.539 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:39.539 nvme1n1 : 5.06 2885.81 11.27 0.00 0.00 43989.71 4234.63 60898.07
00:13:39.539 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x0 length 0x80000
00:13:39.539 nvme2n1 : 5.06 1745.53 6.82 0.00 0.00 72813.33 8721.33 72593.72
00:13:39.539 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x80000 length 0x80000
00:13:39.539 nvme2n1 : 5.07 1590.35 6.21 0.00 0.00 79778.08 8418.86 80659.69
00:13:39.539 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x0 length 0x80000
00:13:39.539 nvme2n2 : 5.07 1767.61 6.90 0.00 0.00 71814.75 3478.45 59284.87
00:13:39.539 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x80000 length 0x80000
00:13:39.539 nvme2n2 : 5.07 1639.57 6.40 0.00 0.00 77176.26 11796.48 71383.83
00:13:39.539 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:39.539 Verification LBA range: start 0x0 length 0x80000
00:13:39.539 nvme2n3 : 5.06 1744.96 6.82 0.00 0.00 72568.27 14720.39 63317.86
00:13:39.540 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:39.540 Verification LBA range: start 0x80000 length 0x80000
00:13:39.540 nvme2n3 : 5.08 1639.13 6.40 0.00 0.00 77009.15 10939.47 73803.62
00:13:39.540 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:39.540 Verification LBA range: start 0x0 length 0x20000
00:13:39.540 nvme3n1 : 5.07 1766.97 6.90 0.00 0.00 71487.86 4411.08 72190.42
00:13:39.540 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:39.540 Verification LBA range: start 0x20000 length 0x20000
00:13:39.540 nvme3n1 : 5.08 1661.93 6.49 0.00 0.00 75816.28 3932.16 75820.11
00:13:39.540 [2024-11-20T09:07:18.459Z] ===================================================================================================================
00:13:39.540 [2024-11-20T09:07:18.459Z] Total : 23068.75 90.11 0.00 0.00 65985.92 3478.45 80659.69
00:13:40.473
00:13:40.473 real 0m6.567s
00:13:40.473 user 0m10.578s
00:13:40.473 sys 0m1.634s
09:07:19 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable
00:13:40.473 ************************************
00:13:40.473 END TEST bdev_verify
00:13:40.473 ************************************
00:13:40.473 09:07:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:13:40.473 09:07:19 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:40.473 09:07:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:13:40.473 09:07:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:40.473 09:07:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:40.473 ************************************
00:13:40.473 START TEST bdev_verify_big_io
00:13:40.473 ************************************
00:13:40.473 09:07:19 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:40.732 [2024-11-20 09:07:19.259222] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:13:40.732 [2024-11-20 09:07:19.259340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70471 ]
00:13:40.732 [2024-11-20 09:07:19.418648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:40.732 [2024-11-20 09:07:19.536894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.732 [2024-11-20 09:07:19.536929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:41.298 Running I/O for 5 seconds...
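bdev_verify_big_io is the same verify pass with -o 65536, so every operation moves 64 KiB instead of 4 KiB; IOPS drop by roughly the size ratio while MiB/s stays in the same range. The MiB/s column bdevperf prints is just IOPS times I/O size, which is easy to sanity-check in the shell (numbers taken from the tables above and below):

    # MiB/s = IOPS * io_size_bytes / 2^20
    awk 'BEGIN { printf "%.2f\n", 24432.00 * 4096 / 1048576 }'   # 95.44  (4 KiB sample above)
    awk 'BEGIN { printf "%.2f\n", 2621.67 * 65536 / 1048576 }'   # 163.85 (64 KiB sample below)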
00:13:47.168 624.00 IOPS, 39.00 MiB/s
[2024-11-20T09:07:26.087Z] 2422.50 IOPS, 151.41 MiB/s
[2024-11-20T09:07:26.652Z] 2621.67 IOPS, 163.85 MiB/s
00:13:47.733 Latency(us)
00:13:47.733 [2024-11-20T09:07:26.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:47.733 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x0 length 0xa000
00:13:47.733 nvme0n1 : 6.24 76.94 4.81 0.00 0.00 1588433.00 296827.67 2684354.56
00:13:47.733 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0xa000 length 0xa000
00:13:47.733 nvme0n1 : 5.87 133.62 8.35 0.00 0.00 888672.28 34885.32 1167952.34
00:13:47.733 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x0 length 0xbd0b
00:13:47.733 nvme1n1 : 6.04 141.78 8.86 0.00 0.00 838728.76 31053.98 1019538.51
00:13:47.733 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:47.733 nvme1n1 : 5.96 115.48 7.22 0.00 0.00 1027720.55 168578.76 1819682.66
00:13:47.733 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x0 length 0x8000
00:13:47.733 nvme2n1 : 6.04 105.97 6.62 0.00 0.00 1078508.31 136314.88 1297007.85
00:13:47.733 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x8000 length 0x8000
00:13:47.733 nvme2n1 : 5.96 115.44 7.22 0.00 0.00 994179.36 74206.92 1142141.24
00:13:47.733 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x0 length 0x8000
00:13:47.733 nvme2n2 : 6.23 102.72 6.42 0.00 0.00 1083228.79 169385.35 2232660.28
00:13:47.733 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x8000 length 0x8000
00:13:47.733 nvme2n2 : 6.04 113.99 7.12 0.00 0.00 987550.73 73803.62 1561571.64
00:13:47.733 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x0 length 0x8000
00:13:47.733 nvme2n3 : 6.23 110.38 6.90 0.00 0.00 971014.61 84692.68 1251838.42
00:13:47.733 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x8000 length 0x8000
00:13:47.733 nvme2n3 : 6.23 118.14 7.38 0.00 0.00 905397.37 58881.58 1619646.62
00:13:47.733 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x0 length 0x2000
00:13:47.733 nvme3n1 : 6.37 129.54 8.10 0.00 0.00 803908.17 146.51 1600288.30
00:13:47.733 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:47.733 Verification LBA range: start 0x2000 length 0x2000
00:13:47.733 nvme3n1 : 6.37 118.11 7.38 0.00 0.00 871583.79 98.07 1400252.26
00:13:47.733 [2024-11-20T09:07:26.652Z] ===================================================================================================================
00:13:47.733 [2024-11-20T09:07:26.652Z] Total : 1382.12 86.38 0.00 0.00 978643.62 98.07 2684354.56
00:13:48.668
00:13:48.668 real 0m8.057s
00:13:48.668 user 0m14.817s
00:13:48.668 sys 0m0.513s
00:13:48.668 09:07:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.668
************************************
00:13:48.668 END TEST bdev_verify_big_io
00:13:48.668 ************************************
00:13:48.668 09:07:27 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:48.668 09:07:27 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:48.668 09:07:27 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:48.668 09:07:27 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.668 09:07:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:48.668 ************************************
00:13:48.668 START TEST bdev_write_zeroes
00:13:48.668 ************************************
00:13:48.668 09:07:27 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:48.668 [2024-11-20 09:07:27.354897] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:13:48.668 [2024-11-20 09:07:27.355017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70585 ]
00:13:48.668 [2024-11-20 09:07:27.513645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:48.926 [2024-11-20 09:07:27.609696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:49.183 Running I/O for 1 seconds...
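A write_zeroes pass is possible because every xNVMe bdev's supported_io_types block (dumped at the top of this excerpt) reports "write_zeroes": true; the same block is what the earlier jq filter probed for "unmap", found no match, and so dropped the fio trim job file. A sketch of that kind of capability query, assuming the bdev list has first been saved as a JSON array with the bdev_get_bdevs RPC (bdevs.json is a hypothetical file name):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs > bdevs.json
    # name every bdev that advertises a given I/O type
    jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name' bdevs.json
    jq -r '.[] | select(.supported_io_types.unmap == true) | .name' bdevs.json   # empty for these bdevs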
00:13:50.116 81088.00 IOPS, 316.75 MiB/s
00:13:50.116 Latency(us)
00:13:50.116 [2024-11-20T09:07:29.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:50.116 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:50.116 nvme0n1 : 1.02 11690.40 45.67 0.00 0.00 10936.68 7208.96 19055.85
00:13:50.116 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:50.116 nvme1n1 : 1.02 22662.53 88.53 0.00 0.00 5635.43 3226.39 16938.54
00:13:50.116 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:50.116 nvme2n1 : 1.02 11676.80 45.61 0.00 0.00 10902.86 6175.51 18652.55
00:13:50.116 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:50.116 nvme2n2 : 1.02 11663.69 45.56 0.00 0.00 10907.25 6276.33 18652.55
00:13:50.116 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:50.116 nvme2n3 : 1.02 11650.62 45.51 0.00 0.00 10912.84 6377.16 18652.55
00:13:50.116 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:50.116 nvme3n1 : 1.02 11637.51 45.46 0.00 0.00 10921.01 6503.19 18652.55
00:13:50.116 [2024-11-20T09:07:29.035Z] ===================================================================================================================
00:13:50.116 [2024-11-20T09:07:29.035Z] Total : 80981.57 316.33 0.00 0.00 9442.45 3226.39 19055.85
00:13:51.049
00:13:51.049 real 0m2.403s
00:13:51.049 user 0m1.677s
00:13:51.049 sys 0m0.577s
00:13:51.049 09:07:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:51.049 09:07:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:51.049 ************************************
00:13:51.049 END TEST bdev_write_zeroes
00:13:51.049 ************************************
00:13:51.049 09:07:29 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:51.049 09:07:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:51.049 09:07:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:51.049 09:07:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:51.049 ************************************
00:13:51.049 START TEST bdev_json_nonenclosed
00:13:51.049 ************************************
00:13:51.049 09:07:29 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:51.050 [2024-11-20 09:07:29.802927] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:13:51.050 [2024-11-20 09:07:29.803042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70636 ]
00:13:51.050 [2024-11-20 09:07:29.962258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:51.307 [2024-11-20 09:07:30.060967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.307 [2024-11-20 09:07:30.061054] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:51.307 [2024-11-20 09:07:30.061072] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:51.307 [2024-11-20 09:07:30.061081] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:51.566
00:13:51.566 real 0m0.496s
00:13:51.566 user 0m0.305s
00:13:51.566 sys 0m0.088s
00:13:51.566 09:07:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:51.566 09:07:30 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:13:51.566 ************************************
00:13:51.566 END TEST bdev_json_nonenclosed
00:13:51.566 ************************************
00:13:51.566 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:51.566 09:07:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:51.566 09:07:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:51.566 09:07:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:51.566 ************************************
00:13:51.566 START TEST bdev_json_nonarray
00:13:51.566 ************************************
00:13:51.566 09:07:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:51.566 [2024-11-20 09:07:30.339565] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:13:51.566 [2024-11-20 09:07:30.339674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70656 ]
00:13:51.839 [2024-11-20 09:07:30.500745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:51.839 [2024-11-20 09:07:30.598882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.839 [2024-11-20 09:07:30.598973] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
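bdev_json_nonenclosed and bdev_json_nonarray are negative tests: each hands bdevperf a deliberately malformed --json config and passes only if the app prints the json_config error seen here and stops non-zero. The two failure modes, sketched with hypothetical throwaway files rather than the repo's nonenclosed.json and nonarray.json fixtures:

    # 1) config not enclosed in {}   2) "subsystems" present but not an array
    printf '"subsystems": []\n' > /tmp/nonenclosed.json
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
    for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json "$cfg" \
            -q 128 -o 4096 -w write_zeroes -t 1 \
            && echo "unexpected success: $cfg" \
            || echo "rejected as expected: $cfg"
    done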
00:13:51.839 [2024-11-20 09:07:30.598990] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:51.839 [2024-11-20 09:07:30.598999] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:52.108 00:13:52.108 real 0m0.496s 00:13:52.108 user 0m0.301s 00:13:52.108 sys 0m0.092s 00:13:52.108 09:07:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.108 09:07:30 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:52.108 ************************************ 00:13:52.108 END TEST bdev_json_nonarray 00:13:52.108 ************************************ 00:13:52.108 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:52.109 09:07:30 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:52.367 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:24.433 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:24.433 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:24.433 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:28.616 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:28.875 00:14:28.875 real 1m28.396s 00:14:28.875 user 1m31.394s 00:14:28.875 sys 1m21.922s 00:14:28.875 09:08:07 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.875 ************************************ 00:14:28.875 END TEST blockdev_xnvme 00:14:28.875 ************************************ 00:14:28.875 09:08:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.875 09:08:07 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:28.875 09:08:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.876 09:08:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.876 09:08:07 -- common/autotest_common.sh@10 -- # set +x 00:14:28.876 ************************************ 00:14:28.876 START TEST ublk 00:14:28.876 ************************************ 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:28.876 * Looking for test storage... 
00:14:28.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.876 09:08:07 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.876 09:08:07 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.876 09:08:07 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.876 09:08:07 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.876 09:08:07 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.876 09:08:07 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:28.876 09:08:07 ublk -- scripts/common.sh@345 -- # : 1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.876 09:08:07 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.876 09:08:07 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@353 -- # local d=1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.876 09:08:07 ublk -- scripts/common.sh@355 -- # echo 1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.876 09:08:07 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@353 -- # local d=2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.876 09:08:07 ublk -- scripts/common.sh@355 -- # echo 2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.876 09:08:07 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.876 09:08:07 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.876 09:08:07 ublk -- scripts/common.sh@368 -- # return 0 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.876 --rc genhtml_branch_coverage=1 00:14:28.876 --rc genhtml_function_coverage=1 00:14:28.876 --rc genhtml_legend=1 00:14:28.876 --rc geninfo_all_blocks=1 00:14:28.876 --rc geninfo_unexecuted_blocks=1 00:14:28.876 00:14:28.876 ' 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.876 --rc genhtml_branch_coverage=1 00:14:28.876 --rc genhtml_function_coverage=1 00:14:28.876 --rc genhtml_legend=1 00:14:28.876 --rc geninfo_all_blocks=1 00:14:28.876 --rc geninfo_unexecuted_blocks=1 00:14:28.876 00:14:28.876 ' 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.876 --rc genhtml_branch_coverage=1 00:14:28.876 --rc 
genhtml_function_coverage=1 00:14:28.876 --rc genhtml_legend=1 00:14:28.876 --rc geninfo_all_blocks=1 00:14:28.876 --rc geninfo_unexecuted_blocks=1 00:14:28.876 00:14:28.876 ' 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:28.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.876 --rc genhtml_branch_coverage=1 00:14:28.876 --rc genhtml_function_coverage=1 00:14:28.876 --rc genhtml_legend=1 00:14:28.876 --rc geninfo_all_blocks=1 00:14:28.876 --rc geninfo_unexecuted_blocks=1 00:14:28.876 00:14:28.876 ' 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:28.876 09:08:07 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:28.876 09:08:07 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:28.876 09:08:07 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:28.876 09:08:07 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:28.876 09:08:07 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:28.876 09:08:07 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:28.876 09:08:07 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:28.876 09:08:07 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:28.876 09:08:07 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.876 09:08:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.876 ************************************ 00:14:28.876 START TEST test_save_ublk_config 00:14:28.876 ************************************ 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70960 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70960 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 70960 ']' 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:28.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.876 09:08:07 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:29.135 [2024-11-20 09:08:07.818465] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:14:29.135 [2024-11-20 09:08:07.818599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70960 ] 00:14:29.135 [2024-11-20 09:08:07.977333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.393 [2024-11-20 09:08:08.070394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:29.960 [2024-11-20 09:08:08.680892] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:29.960 [2024-11-20 09:08:08.681689] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:29.960 malloc0 00:14:29.960 [2024-11-20 09:08:08.745002] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:29.960 [2024-11-20 09:08:08.745077] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:29.960 [2024-11-20 09:08:08.745086] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:29.960 [2024-11-20 09:08:08.745093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:29.960 [2024-11-20 09:08:08.753959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:29.960 [2024-11-20 09:08:08.753979] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:29.960 [2024-11-20 09:08:08.760896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:29.960 [2024-11-20 09:08:08.760986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:29.960 [2024-11-20 09:08:08.777891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:29.960 0 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.960 09:08:08 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:30.219 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.219 09:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:30.219 "subsystems": [ 00:14:30.219 { 00:14:30.219 "subsystem": "fsdev", 00:14:30.219 
"config": [ 00:14:30.219 { 00:14:30.219 "method": "fsdev_set_opts", 00:14:30.219 "params": { 00:14:30.219 "fsdev_io_pool_size": 65535, 00:14:30.219 "fsdev_io_cache_size": 256 00:14:30.219 } 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "keyring", 00:14:30.219 "config": [] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "iobuf", 00:14:30.219 "config": [ 00:14:30.219 { 00:14:30.219 "method": "iobuf_set_options", 00:14:30.219 "params": { 00:14:30.219 "small_pool_count": 8192, 00:14:30.219 "large_pool_count": 1024, 00:14:30.219 "small_bufsize": 8192, 00:14:30.219 "large_bufsize": 135168, 00:14:30.219 "enable_numa": false 00:14:30.219 } 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "sock", 00:14:30.219 "config": [ 00:14:30.219 { 00:14:30.219 "method": "sock_set_default_impl", 00:14:30.219 "params": { 00:14:30.219 "impl_name": "posix" 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "sock_impl_set_options", 00:14:30.219 "params": { 00:14:30.219 "impl_name": "ssl", 00:14:30.219 "recv_buf_size": 4096, 00:14:30.219 "send_buf_size": 4096, 00:14:30.219 "enable_recv_pipe": true, 00:14:30.219 "enable_quickack": false, 00:14:30.219 "enable_placement_id": 0, 00:14:30.219 "enable_zerocopy_send_server": true, 00:14:30.219 "enable_zerocopy_send_client": false, 00:14:30.219 "zerocopy_threshold": 0, 00:14:30.219 "tls_version": 0, 00:14:30.219 "enable_ktls": false 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "sock_impl_set_options", 00:14:30.219 "params": { 00:14:30.219 "impl_name": "posix", 00:14:30.219 "recv_buf_size": 2097152, 00:14:30.219 "send_buf_size": 2097152, 00:14:30.219 "enable_recv_pipe": true, 00:14:30.219 "enable_quickack": false, 00:14:30.219 "enable_placement_id": 0, 00:14:30.219 "enable_zerocopy_send_server": true, 00:14:30.219 "enable_zerocopy_send_client": false, 00:14:30.219 "zerocopy_threshold": 0, 00:14:30.219 "tls_version": 0, 00:14:30.219 "enable_ktls": false 00:14:30.219 } 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "vmd", 00:14:30.219 "config": [] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "accel", 00:14:30.219 "config": [ 00:14:30.219 { 00:14:30.219 "method": "accel_set_options", 00:14:30.219 "params": { 00:14:30.219 "small_cache_size": 128, 00:14:30.219 "large_cache_size": 16, 00:14:30.219 "task_count": 2048, 00:14:30.219 "sequence_count": 2048, 00:14:30.219 "buf_count": 2048 00:14:30.219 } 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "bdev", 00:14:30.219 "config": [ 00:14:30.219 { 00:14:30.219 "method": "bdev_set_options", 00:14:30.219 "params": { 00:14:30.219 "bdev_io_pool_size": 65535, 00:14:30.219 "bdev_io_cache_size": 256, 00:14:30.219 "bdev_auto_examine": true, 00:14:30.219 "iobuf_small_cache_size": 128, 00:14:30.219 "iobuf_large_cache_size": 16 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "bdev_raid_set_options", 00:14:30.219 "params": { 00:14:30.219 "process_window_size_kb": 1024, 00:14:30.219 "process_max_bandwidth_mb_sec": 0 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "bdev_iscsi_set_options", 00:14:30.219 "params": { 00:14:30.219 "timeout_sec": 30 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "bdev_nvme_set_options", 00:14:30.219 "params": { 00:14:30.219 "action_on_timeout": "none", 00:14:30.219 "timeout_us": 0, 00:14:30.219 "timeout_admin_us": 0, 00:14:30.219 
"keep_alive_timeout_ms": 10000, 00:14:30.219 "arbitration_burst": 0, 00:14:30.219 "low_priority_weight": 0, 00:14:30.219 "medium_priority_weight": 0, 00:14:30.219 "high_priority_weight": 0, 00:14:30.219 "nvme_adminq_poll_period_us": 10000, 00:14:30.219 "nvme_ioq_poll_period_us": 0, 00:14:30.219 "io_queue_requests": 0, 00:14:30.219 "delay_cmd_submit": true, 00:14:30.219 "transport_retry_count": 4, 00:14:30.219 "bdev_retry_count": 3, 00:14:30.219 "transport_ack_timeout": 0, 00:14:30.219 "ctrlr_loss_timeout_sec": 0, 00:14:30.219 "reconnect_delay_sec": 0, 00:14:30.219 "fast_io_fail_timeout_sec": 0, 00:14:30.219 "disable_auto_failback": false, 00:14:30.219 "generate_uuids": false, 00:14:30.219 "transport_tos": 0, 00:14:30.219 "nvme_error_stat": false, 00:14:30.219 "rdma_srq_size": 0, 00:14:30.219 "io_path_stat": false, 00:14:30.219 "allow_accel_sequence": false, 00:14:30.219 "rdma_max_cq_size": 0, 00:14:30.219 "rdma_cm_event_timeout_ms": 0, 00:14:30.219 "dhchap_digests": [ 00:14:30.219 "sha256", 00:14:30.219 "sha384", 00:14:30.219 "sha512" 00:14:30.219 ], 00:14:30.219 "dhchap_dhgroups": [ 00:14:30.219 "null", 00:14:30.219 "ffdhe2048", 00:14:30.219 "ffdhe3072", 00:14:30.219 "ffdhe4096", 00:14:30.219 "ffdhe6144", 00:14:30.219 "ffdhe8192" 00:14:30.219 ] 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "bdev_nvme_set_hotplug", 00:14:30.219 "params": { 00:14:30.219 "period_us": 100000, 00:14:30.219 "enable": false 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "bdev_malloc_create", 00:14:30.219 "params": { 00:14:30.219 "name": "malloc0", 00:14:30.219 "num_blocks": 8192, 00:14:30.219 "block_size": 4096, 00:14:30.219 "physical_block_size": 4096, 00:14:30.219 "uuid": "dc354c8e-b87e-4062-994a-91eb1c58c515", 00:14:30.219 "optimal_io_boundary": 0, 00:14:30.219 "md_size": 0, 00:14:30.219 "dif_type": 0, 00:14:30.219 "dif_is_head_of_md": false, 00:14:30.219 "dif_pi_format": 0 00:14:30.219 } 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "method": "bdev_wait_for_examine" 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "scsi", 00:14:30.219 "config": null 00:14:30.219 }, 00:14:30.219 { 00:14:30.219 "subsystem": "scheduler", 00:14:30.219 "config": [ 00:14:30.219 { 00:14:30.219 "method": "framework_set_scheduler", 00:14:30.219 "params": { 00:14:30.219 "name": "static" 00:14:30.219 } 00:14:30.219 } 00:14:30.219 ] 00:14:30.219 }, 00:14:30.219 { 00:14:30.220 "subsystem": "vhost_scsi", 00:14:30.220 "config": [] 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "subsystem": "vhost_blk", 00:14:30.220 "config": [] 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "subsystem": "ublk", 00:14:30.220 "config": [ 00:14:30.220 { 00:14:30.220 "method": "ublk_create_target", 00:14:30.220 "params": { 00:14:30.220 "cpumask": "1" 00:14:30.220 } 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "method": "ublk_start_disk", 00:14:30.220 "params": { 00:14:30.220 "bdev_name": "malloc0", 00:14:30.220 "ublk_id": 0, 00:14:30.220 "num_queues": 1, 00:14:30.220 "queue_depth": 128 00:14:30.220 } 00:14:30.220 } 00:14:30.220 ] 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "subsystem": "nbd", 00:14:30.220 "config": [] 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "subsystem": "nvmf", 00:14:30.220 "config": [ 00:14:30.220 { 00:14:30.220 "method": "nvmf_set_config", 00:14:30.220 "params": { 00:14:30.220 "discovery_filter": "match_any", 00:14:30.220 "admin_cmd_passthru": { 00:14:30.220 "identify_ctrlr": false 00:14:30.220 }, 00:14:30.220 "dhchap_digests": [ 00:14:30.220 "sha256", 00:14:30.220 
"sha384", 00:14:30.220 "sha512" 00:14:30.220 ], 00:14:30.220 "dhchap_dhgroups": [ 00:14:30.220 "null", 00:14:30.220 "ffdhe2048", 00:14:30.220 "ffdhe3072", 00:14:30.220 "ffdhe4096", 00:14:30.220 "ffdhe6144", 00:14:30.220 "ffdhe8192" 00:14:30.220 ] 00:14:30.220 } 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "method": "nvmf_set_max_subsystems", 00:14:30.220 "params": { 00:14:30.220 "max_subsystems": 1024 00:14:30.220 } 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "method": "nvmf_set_crdt", 00:14:30.220 "params": { 00:14:30.220 "crdt1": 0, 00:14:30.220 "crdt2": 0, 00:14:30.220 "crdt3": 0 00:14:30.220 } 00:14:30.220 } 00:14:30.220 ] 00:14:30.220 }, 00:14:30.220 { 00:14:30.220 "subsystem": "iscsi", 00:14:30.220 "config": [ 00:14:30.220 { 00:14:30.220 "method": "iscsi_set_options", 00:14:30.220 "params": { 00:14:30.220 "node_base": "iqn.2016-06.io.spdk", 00:14:30.220 "max_sessions": 128, 00:14:30.220 "max_connections_per_session": 2, 00:14:30.220 "max_queue_depth": 64, 00:14:30.220 "default_time2wait": 2, 00:14:30.220 "default_time2retain": 20, 00:14:30.220 "first_burst_length": 8192, 00:14:30.220 "immediate_data": true, 00:14:30.220 "allow_duplicated_isid": false, 00:14:30.220 "error_recovery_level": 0, 00:14:30.220 "nop_timeout": 60, 00:14:30.220 "nop_in_interval": 30, 00:14:30.220 "disable_chap": false, 00:14:30.220 "require_chap": false, 00:14:30.220 "mutual_chap": false, 00:14:30.220 "chap_group": 0, 00:14:30.220 "max_large_datain_per_connection": 64, 00:14:30.220 "max_r2t_per_connection": 4, 00:14:30.220 "pdu_pool_size": 36864, 00:14:30.220 "immediate_data_pool_size": 16384, 00:14:30.220 "data_out_pool_size": 2048 00:14:30.220 } 00:14:30.220 } 00:14:30.220 ] 00:14:30.220 } 00:14:30.220 ] 00:14:30.220 }' 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70960 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 70960 ']' 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 70960 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70960 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:30.220 killing process with pid 70960 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70960' 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 70960 00:14:30.220 09:08:09 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 70960 00:14:31.594 [2024-11-20 09:08:10.125096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:31.594 [2024-11-20 09:08:10.164912] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:31.594 [2024-11-20 09:08:10.165041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:31.594 [2024-11-20 09:08:10.172907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:31.594 [2024-11-20 09:08:10.172955] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:14:31.594 [2024-11-20 09:08:10.172966] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:31.594 [2024-11-20 09:08:10.172990] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:31.594 [2024-11-20 09:08:10.173124] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71009 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71009 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 71009 ']' 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.530 09:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:32.530 "subsystems": [ 00:14:32.530 { 00:14:32.530 "subsystem": "fsdev", 00:14:32.530 "config": [ 00:14:32.530 { 00:14:32.530 "method": "fsdev_set_opts", 00:14:32.530 "params": { 00:14:32.530 "fsdev_io_pool_size": 65535, 00:14:32.530 "fsdev_io_cache_size": 256 00:14:32.530 } 00:14:32.530 } 00:14:32.530 ] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "keyring", 00:14:32.530 "config": [] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "iobuf", 00:14:32.530 "config": [ 00:14:32.530 { 00:14:32.530 "method": "iobuf_set_options", 00:14:32.530 "params": { 00:14:32.530 "small_pool_count": 8192, 00:14:32.530 "large_pool_count": 1024, 00:14:32.530 "small_bufsize": 8192, 00:14:32.530 "large_bufsize": 135168, 00:14:32.530 "enable_numa": false 00:14:32.530 } 00:14:32.530 } 00:14:32.530 ] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "sock", 00:14:32.530 "config": [ 00:14:32.530 { 00:14:32.530 "method": "sock_set_default_impl", 00:14:32.530 "params": { 00:14:32.530 "impl_name": "posix" 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "sock_impl_set_options", 00:14:32.530 "params": { 00:14:32.530 "impl_name": "ssl", 00:14:32.530 "recv_buf_size": 4096, 00:14:32.530 "send_buf_size": 4096, 00:14:32.530 "enable_recv_pipe": true, 00:14:32.530 "enable_quickack": false, 00:14:32.530 "enable_placement_id": 0, 00:14:32.530 "enable_zerocopy_send_server": true, 00:14:32.530 "enable_zerocopy_send_client": false, 00:14:32.530 "zerocopy_threshold": 0, 00:14:32.530 "tls_version": 0, 00:14:32.530 "enable_ktls": false 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "sock_impl_set_options", 00:14:32.530 "params": { 00:14:32.530 "impl_name": "posix", 00:14:32.530 "recv_buf_size": 2097152, 00:14:32.530 "send_buf_size": 2097152, 00:14:32.530 "enable_recv_pipe": true, 00:14:32.530 "enable_quickack": false, 00:14:32.530 "enable_placement_id": 0, 00:14:32.530 "enable_zerocopy_send_server": true, 00:14:32.530 "enable_zerocopy_send_client": false, 00:14:32.530 "zerocopy_threshold": 0, 00:14:32.530 "tls_version": 0, 00:14:32.530 "enable_ktls": false 00:14:32.530 } 00:14:32.530 } 00:14:32.530 ] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "vmd", 00:14:32.530 "config": [] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 
"subsystem": "accel", 00:14:32.530 "config": [ 00:14:32.530 { 00:14:32.530 "method": "accel_set_options", 00:14:32.530 "params": { 00:14:32.530 "small_cache_size": 128, 00:14:32.530 "large_cache_size": 16, 00:14:32.530 "task_count": 2048, 00:14:32.530 "sequence_count": 2048, 00:14:32.530 "buf_count": 2048 00:14:32.530 } 00:14:32.530 } 00:14:32.530 ] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "bdev", 00:14:32.530 "config": [ 00:14:32.530 { 00:14:32.530 "method": "bdev_set_options", 00:14:32.530 "params": { 00:14:32.530 "bdev_io_pool_size": 65535, 00:14:32.530 "bdev_io_cache_size": 256, 00:14:32.530 "bdev_auto_examine": true, 00:14:32.530 "iobuf_small_cache_size": 128, 00:14:32.530 "iobuf_large_cache_size": 16 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "bdev_raid_set_options", 00:14:32.530 "params": { 00:14:32.530 "process_window_size_kb": 1024, 00:14:32.530 "process_max_bandwidth_mb_sec": 0 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "bdev_iscsi_set_options", 00:14:32.530 "params": { 00:14:32.530 "timeout_sec": 30 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "bdev_nvme_set_options", 00:14:32.530 "params": { 00:14:32.530 "action_on_timeout": "none", 00:14:32.530 "timeout_us": 0, 00:14:32.530 "timeout_admin_us": 0, 00:14:32.530 "keep_alive_timeout_ms": 10000, 00:14:32.530 "arbitration_burst": 0, 00:14:32.530 "low_priority_weight": 0, 00:14:32.530 "medium_priority_weight": 0, 00:14:32.530 "high_priority_weight": 0, 00:14:32.530 "nvme_adminq_poll_period_us": 10000, 00:14:32.530 "nvme_ioq_poll_period_us": 0, 00:14:32.530 "io_queue_requests": 0, 00:14:32.530 "delay_cmd_submit": true, 00:14:32.530 "transport_retry_count": 4, 00:14:32.530 "bdev_retry_count": 3, 00:14:32.530 "transport_ack_timeout": 0, 00:14:32.530 "ctrlr_loss_timeout_sec": 0, 00:14:32.530 "reconnect_delay_sec": 0, 00:14:32.530 "fast_io_fail_timeout_sec": 0, 00:14:32.530 "disable_auto_failback": false, 00:14:32.530 "generate_uuids": false, 00:14:32.530 "transport_tos": 0, 00:14:32.530 "nvme_error_stat": false, 00:14:32.530 "rdma_srq_size": 0, 00:14:32.530 "io_path_stat": false, 00:14:32.530 "allow_accel_sequence": false, 00:14:32.530 "rdma_max_cq_size": 0, 00:14:32.530 "rdma_cm_event_timeout_ms": 0, 00:14:32.530 "dhchap_digests": [ 00:14:32.530 "sha256", 00:14:32.530 "sha384", 00:14:32.530 "sha512" 00:14:32.530 ], 00:14:32.530 "dhchap_dhgroups": [ 00:14:32.530 "null", 00:14:32.530 "ffdhe2048", 00:14:32.530 "ffdhe3072", 00:14:32.530 "ffdhe4096", 00:14:32.530 "ffdhe6144", 00:14:32.530 "ffdhe8192" 00:14:32.530 ] 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "bdev_nvme_set_hotplug", 00:14:32.530 "params": { 00:14:32.530 "period_us": 100000, 00:14:32.530 "enable": false 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "bdev_malloc_create", 00:14:32.530 "params": { 00:14:32.530 "name": "malloc0", 00:14:32.530 "num_blocks": 8192, 00:14:32.530 "block_size": 4096, 00:14:32.530 "physical_block_size": 4096, 00:14:32.530 "uuid": "dc354c8e-b87e-4062-994a-91eb1c58c515", 00:14:32.530 "optimal_io_boundary": 0, 00:14:32.530 "md_size": 0, 00:14:32.530 "dif_type": 0, 00:14:32.530 "dif_is_head_of_md": false, 00:14:32.530 "dif_pi_format": 0 00:14:32.530 } 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "method": "bdev_wait_for_examine" 00:14:32.530 } 00:14:32.530 ] 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "scsi", 00:14:32.530 "config": null 00:14:32.530 }, 00:14:32.530 { 00:14:32.530 "subsystem": "scheduler", 
00:14:32.530 "config": [ 00:14:32.530 { 00:14:32.530 "method": "framework_set_scheduler", 00:14:32.530 "params": { 00:14:32.530 "name": "static" 00:14:32.530 } 00:14:32.530 } 00:14:32.530 ] 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "subsystem": "vhost_scsi", 00:14:32.531 "config": [] 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "subsystem": "vhost_blk", 00:14:32.531 "config": [] 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "subsystem": "ublk", 00:14:32.531 "config": [ 00:14:32.531 { 00:14:32.531 "method": "ublk_create_target", 00:14:32.531 "params": { 00:14:32.531 "cpumask": "1" 00:14:32.531 } 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "method": "ublk_start_disk", 00:14:32.531 "params": { 00:14:32.531 "bdev_name": "malloc0", 00:14:32.531 "ublk_id": 0, 00:14:32.531 "num_queues": 1, 00:14:32.531 "queue_depth": 128 00:14:32.531 } 00:14:32.531 } 00:14:32.531 ] 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "subsystem": "nbd", 00:14:32.531 "config": [] 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "subsystem": "nvmf", 00:14:32.531 "config": [ 00:14:32.531 { 00:14:32.531 "method": "nvmf_set_config", 00:14:32.531 "params": { 00:14:32.531 "discovery_filter": "match_any", 00:14:32.531 "admin_cmd_passthru": { 00:14:32.531 "identify_ctrlr": false 00:14:32.531 }, 00:14:32.531 "dhchap_digests": [ 00:14:32.531 "sha256", 00:14:32.531 "sha384", 00:14:32.531 "sha512" 00:14:32.531 ], 00:14:32.531 "dhchap_dhgroups": [ 00:14:32.531 "null", 00:14:32.531 "ffdhe2048", 00:14:32.531 "ffdhe3072", 00:14:32.531 "ffdhe4096", 00:14:32.531 "ffdhe6144", 00:14:32.531 "ffdhe8192" 00:14:32.531 ] 00:14:32.531 } 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "method": "nvmf_set_max_subsystems", 00:14:32.531 "params": { 00:14:32.531 "max_subsystems": 1024 00:14:32.531 } 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "method": "nvmf_set_crdt", 00:14:32.531 "params": { 00:14:32.531 "crdt1": 0, 00:14:32.531 "crdt2": 0, 00:14:32.531 "crdt3": 0 00:14:32.531 } 00:14:32.531 } 00:14:32.531 ] 00:14:32.531 }, 00:14:32.531 { 00:14:32.531 "subsystem": "iscsi", 00:14:32.531 "config": [ 00:14:32.531 { 00:14:32.531 "method": "iscsi_set_options", 00:14:32.531 "params": { 00:14:32.531 "node_base": "iqn.2016-06.io.spdk", 00:14:32.531 "max_sessions": 128, 00:14:32.531 "max_connections_per_session": 2, 00:14:32.531 "max_queue_depth": 64, 00:14:32.531 "default_time2wait": 2, 00:14:32.531 "default_time2retain": 20, 00:14:32.531 "first_burst_length": 8192, 00:14:32.531 "immediate_data": true, 00:14:32.531 "allow_duplicated_isid": false, 00:14:32.531 "error_recovery_level": 0, 00:14:32.531 "nop_timeout": 60, 00:14:32.531 "nop_in_interval": 30, 00:14:32.531 "disable_chap": false, 00:14:32.531 "require_chap": false, 00:14:32.531 "mutual_chap": false, 00:14:32.531 "chap_group": 0, 00:14:32.531 "max_large_datain_per_connection": 64, 00:14:32.531 "max_r2t_per_connection": 4, 00:14:32.531 "pdu_pool_size": 36864, 00:14:32.531 "immediate_data_pool_size": 16384, 00:14:32.531 "data_out_pool_size": 2048 00:14:32.531 } 00:14:32.531 } 00:14:32.531 ] 00:14:32.531 } 00:14:32.531 ] 00:14:32.531 }' 00:14:32.531 09:08:11 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:32.531 09:08:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:32.790 [2024-11-20 09:08:11.514156] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
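The JSON echoed just above is the configuration captured from the first target with the save_config RPC; test_save_ublk_config now boots a second spdk_tgt from it through process substitution (the -c /dev/fd/63 on the command line) and passes only if /dev/ublkb0 reappears as a block device built purely from that config. The same round trip written out with a hypothetical on-disk file instead of the inline echo:

    # capture the live configuration, then restore it into a fresh target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config > /tmp/ublk_config.json
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json
    # the test inlines the config as -c <(echo "$config"), which is the /dev/fd/63 above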
00:14:32.790 [2024-11-20 09:08:11.514273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71009 ] 00:14:32.790 [2024-11-20 09:08:11.666892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.048 [2024-11-20 09:08:11.743730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.614 [2024-11-20 09:08:12.375888] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:33.614 [2024-11-20 09:08:12.376547] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:33.614 [2024-11-20 09:08:12.383975] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:33.614 [2024-11-20 09:08:12.384035] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:33.614 [2024-11-20 09:08:12.384046] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:33.614 [2024-11-20 09:08:12.384052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:33.614 [2024-11-20 09:08:12.392937] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:33.614 [2024-11-20 09:08:12.392950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:33.614 [2024-11-20 09:08:12.399891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:33.614 [2024-11-20 09:08:12.399966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:33.614 [2024-11-20 09:08:12.416888] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71009 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 71009 ']' 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 71009 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71009 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:33.614 killing process with pid 71009 00:14:33.614 
09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71009' 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 71009 00:14:33.614 09:08:12 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 71009 00:14:34.988 [2024-11-20 09:08:13.725971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:34.988 [2024-11-20 09:08:13.765985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:34.988 [2024-11-20 09:08:13.766117] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:34.988 [2024-11-20 09:08:13.774905] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:34.988 [2024-11-20 09:08:13.774960] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:34.988 [2024-11-20 09:08:13.774968] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:34.988 [2024-11-20 09:08:13.774996] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:34.988 [2024-11-20 09:08:13.775141] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:36.362 09:08:15 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:36.362 00:14:36.362 real 0m7.335s 00:14:36.362 user 0m5.019s 00:14:36.362 sys 0m2.898s 00:14:36.362 09:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:36.362 09:08:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:36.362 ************************************ 00:14:36.362 END TEST test_save_ublk_config 00:14:36.362 ************************************ 00:14:36.362 09:08:15 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71088 00:14:36.362 09:08:15 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:36.362 09:08:15 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:36.362 09:08:15 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71088 00:14:36.362 09:08:15 ublk -- common/autotest_common.sh@835 -- # '[' -z 71088 ']' 00:14:36.362 09:08:15 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.362 09:08:15 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.362 09:08:15 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.362 09:08:15 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.362 09:08:15 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:36.362 [2024-11-20 09:08:15.187907] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:14:36.362 [2024-11-20 09:08:15.188022] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71088 ] 00:14:36.620 [2024-11-20 09:08:15.341408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.620 [2024-11-20 09:08:15.434770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.620 [2024-11-20 09:08:15.434789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.187 09:08:16 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.187 09:08:16 ublk -- common/autotest_common.sh@868 -- # return 0 00:14:37.187 09:08:16 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:37.187 09:08:16 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:37.187 09:08:16 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.187 09:08:16 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.187 ************************************ 00:14:37.187 START TEST test_create_ublk 00:14:37.187 ************************************ 00:14:37.187 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:14:37.187 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:37.187 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.187 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.187 [2024-11-20 09:08:16.037891] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:37.187 [2024-11-20 09:08:16.039595] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:37.187 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.187 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:37.187 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:37.187 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.187 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.446 [2024-11-20 09:08:16.206011] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:37.446 [2024-11-20 09:08:16.206336] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:37.446 [2024-11-20 09:08:16.206350] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:37.446 [2024-11-20 09:08:16.206357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:37.446 [2024-11-20 09:08:16.215113] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:37.446 [2024-11-20 09:08:16.215133] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:37.446 
[2024-11-20 09:08:16.221900] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:37.446 [2024-11-20 09:08:16.228938] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:37.446 [2024-11-20 09:08:16.243914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.446 09:08:16 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:37.446 { 00:14:37.446 "ublk_device": "/dev/ublkb0", 00:14:37.446 "id": 0, 00:14:37.446 "queue_depth": 512, 00:14:37.446 "num_queues": 4, 00:14:37.446 "bdev_name": "Malloc0" 00:14:37.446 } 00:14:37.446 ]' 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:37.446 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:37.447 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:37.447 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:37.447 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:37.705 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:37.705 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:37.705 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:37.705 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:37.705 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:37.705 09:08:16 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
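The fio_template assembled above expands to the single invocation below, reformatted for readability; every flag is taken verbatim from the template. Because --time_based --runtime=10 lets the write phase consume the entire run, fio immediately warns (next line of the log) that the verification read phase will never start:

fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0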
00:14:37.705 09:08:16 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:37.705 fio: verification read phase will never start because write phase uses all of runtime 00:14:37.705 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:37.705 fio-3.35 00:14:37.705 Starting 1 process 00:14:49.918 00:14:49.918 fio_test: (groupid=0, jobs=1): err= 0: pid=71128: Wed Nov 20 09:08:26 2024 00:14:49.918 write: IOPS=15.5k, BW=60.4MiB/s (63.3MB/s)(604MiB/10001msec); 0 zone resets 00:14:49.918 clat (usec): min=34, max=4043, avg=63.97, stdev=90.98 00:14:49.918 lat (usec): min=34, max=4044, avg=64.40, stdev=91.00 00:14:49.918 clat percentiles (usec): 00:14:49.918 | 1.00th=[ 43], 5.00th=[ 46], 10.00th=[ 49], 20.00th=[ 53], 00:14:49.918 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 60], 60.00th=[ 63], 00:14:49.918 | 70.00th=[ 65], 80.00th=[ 69], 90.00th=[ 73], 95.00th=[ 78], 00:14:49.918 | 99.00th=[ 89], 99.50th=[ 105], 99.90th=[ 1778], 99.95th=[ 2704], 00:14:49.918 | 99.99th=[ 3523] 00:14:49.918 bw ( KiB/s): min=52752, max=71504, per=99.43%, avg=61445.89, stdev=6031.93, samples=19 00:14:49.918 iops : min=13188, max=17876, avg=15361.47, stdev=1507.98, samples=19 00:14:49.918 lat (usec) : 50=13.22%, 100=86.23%, 250=0.27%, 500=0.12%, 750=0.01% 00:14:49.918 lat (usec) : 1000=0.01% 00:14:49.918 lat (msec) : 2=0.06%, 4=0.08%, 10=0.01% 00:14:49.918 cpu : usr=2.34%, sys=11.99%, ctx=154515, majf=0, minf=795 00:14:49.918 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:49.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.918 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:49.918 issued rwts: total=0,154516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:49.918 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:49.918 00:14:49.918 Run status group 0 (all jobs): 00:14:49.918 WRITE: bw=60.4MiB/s (63.3MB/s), 60.4MiB/s-60.4MiB/s (63.3MB/s-63.3MB/s), io=604MiB (633MB), run=10001-10001msec 00:14:49.918 00:14:49.918 Disk stats (read/write): 00:14:49.918 ublkb0: ios=0/152730, merge=0/0, ticks=0/8413, in_queue=8414, util=99.10% 00:14:49.918 09:08:26 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.918 [2024-11-20 09:08:26.665913] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:49.918 [2024-11-20 09:08:26.701364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:49.918 [2024-11-20 09:08:26.702255] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:49.918 [2024-11-20 09:08:26.710902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:49.918 [2024-11-20 09:08:26.711134] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:49.918 [2024-11-20 09:08:26.711148] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.918 09:08:26 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.918 [2024-11-20 09:08:26.725968] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:49.918 request: 00:14:49.918 { 00:14:49.918 "ublk_id": 0, 00:14:49.918 "method": "ublk_stop_disk", 00:14:49.918 "req_id": 1 00:14:49.918 } 00:14:49.918 Got JSON-RPC error response 00:14:49.918 response: 00:14:49.918 { 00:14:49.918 "code": -19, 00:14:49.918 "message": "No such device" 00:14:49.918 } 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.918 09:08:26 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.918 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 [2024-11-20 09:08:26.742951] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:49.919 [2024-11-20 09:08:26.746651] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:49.919 [2024-11-20 09:08:26.746685] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:49.919 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:26 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:49.919 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:49.919 09:08:27 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:49.919 ************************************ 00:14:49.919 END TEST test_create_ublk 00:14:49.919 ************************************ 00:14:49.919 09:08:27 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:49.919 00:14:49.919 real 0m11.181s 00:14:49.919 user 0m0.545s 00:14:49.919 sys 0m1.275s 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:49.919 09:08:27 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:49.919 09:08:27 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:49.919 09:08:27 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 ************************************ 00:14:49.919 START TEST test_create_multi_ublk 00:14:49.919 ************************************ 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 [2024-11-20 09:08:27.265881] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:49.919 [2024-11-20 09:08:27.267478] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 [2024-11-20 09:08:27.493997] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:14:49.919 [2024-11-20 09:08:27.494305] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:49.919 [2024-11-20 09:08:27.494316] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:49.919 [2024-11-20 09:08:27.494325] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:49.919 [2024-11-20 09:08:27.505932] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:49.919 [2024-11-20 09:08:27.505952] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:49.919 [2024-11-20 09:08:27.517891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:49.919 [2024-11-20 09:08:27.518409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:49.919 [2024-11-20 09:08:27.565894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 [2024-11-20 09:08:27.779995] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:49.919 [2024-11-20 09:08:27.780316] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:49.919 [2024-11-20 09:08:27.780330] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:49.919 [2024-11-20 09:08:27.780335] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:49.919 [2024-11-20 09:08:27.787903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:49.919 [2024-11-20 09:08:27.787920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:49.919 [2024-11-20 09:08:27.795891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:49.919 [2024-11-20 09:08:27.796389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:49.919 [2024-11-20 09:08:27.804920] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.919 
09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 [2024-11-20 09:08:27.971977] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:49.919 [2024-11-20 09:08:27.972288] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:49.919 [2024-11-20 09:08:27.972301] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:49.919 [2024-11-20 09:08:27.972307] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:49.919 [2024-11-20 09:08:27.979897] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:49.919 [2024-11-20 09:08:27.979917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:49.919 [2024-11-20 09:08:27.987895] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:49.919 [2024-11-20 09:08:27.988408] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:49.919 [2024-11-20 09:08:27.991700] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.919 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:49.919 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:49.919 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.919 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.919 [2024-11-20 09:08:28.143989] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:49.919 [2024-11-20 09:08:28.144292] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:49.919 [2024-11-20 09:08:28.144305] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:49.919 [2024-11-20 09:08:28.144311] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:49.919 
[2024-11-20 09:08:28.151904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:49.919 [2024-11-20 09:08:28.151920] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:49.919 [2024-11-20 09:08:28.159892] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:49.919 [2024-11-20 09:08:28.160392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:49.920 [2024-11-20 09:08:28.168923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:49.920 { 00:14:49.920 "ublk_device": "/dev/ublkb0", 00:14:49.920 "id": 0, 00:14:49.920 "queue_depth": 512, 00:14:49.920 "num_queues": 4, 00:14:49.920 "bdev_name": "Malloc0" 00:14:49.920 }, 00:14:49.920 { 00:14:49.920 "ublk_device": "/dev/ublkb1", 00:14:49.920 "id": 1, 00:14:49.920 "queue_depth": 512, 00:14:49.920 "num_queues": 4, 00:14:49.920 "bdev_name": "Malloc1" 00:14:49.920 }, 00:14:49.920 { 00:14:49.920 "ublk_device": "/dev/ublkb2", 00:14:49.920 "id": 2, 00:14:49.920 "queue_depth": 512, 00:14:49.920 "num_queues": 4, 00:14:49.920 "bdev_name": "Malloc2" 00:14:49.920 }, 00:14:49.920 { 00:14:49.920 "ublk_device": "/dev/ublkb3", 00:14:49.920 "id": 3, 00:14:49.920 "queue_depth": 512, 00:14:49.920 "num_queues": 4, 00:14:49.920 "bdev_name": "Malloc3" 00:14:49.920 } 00:14:49.920 ]' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.920 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.920 [2024-11-20 09:08:28.807986] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:50.187 [2024-11-20 09:08:28.841275] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:50.187 [2024-11-20 09:08:28.842382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:50.187 [2024-11-20 09:08:28.847901] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:50.187 [2024-11-20 09:08:28.848133] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:50.187 [2024-11-20 09:08:28.848147] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:50.187 [2024-11-20 09:08:28.862969] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:50.187 [2024-11-20 09:08:28.903928] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:50.187 [2024-11-20 09:08:28.904600] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:50.187 [2024-11-20 09:08:28.911904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:50.187 [2024-11-20 09:08:28.912148] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:50.187 [2024-11-20 09:08:28.912162] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:50.187 [2024-11-20 09:08:28.927972] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:50.187 [2024-11-20 09:08:28.967902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:50.187 [2024-11-20 09:08:28.968580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:50.187 [2024-11-20 09:08:28.975914] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:50.187 [2024-11-20 09:08:28.976151] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:50.187 [2024-11-20 09:08:28.976165] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.187 09:08:28 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.187 [2024-11-20 09:08:28.991971] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:50.187 [2024-11-20 09:08:29.027929] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:50.187 [2024-11-20 09:08:29.028515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:50.187 [2024-11-20 09:08:29.035899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:50.187 [2024-11-20 09:08:29.036123] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:50.187 [2024-11-20 09:08:29.036137] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:50.187 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.187 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:50.447 [2024-11-20 09:08:29.235959] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:50.447 [2024-11-20 09:08:29.239519] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:50.447 [2024-11-20 09:08:29.239550] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:50.447 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:50.447 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:50.447 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:50.447 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.447 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.014 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:51.014 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:51.014 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.014 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.271 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.271 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:51.271 09:08:29 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:51.271 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.271 09:08:29 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:51.530 09:08:30 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:51.530 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:51.788 ************************************ 00:14:51.788 END TEST test_create_multi_ublk 00:14:51.788 ************************************ 00:14:51.788 09:08:30 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:51.788 00:14:51.788 real 0m3.224s 00:14:51.788 user 0m0.798s 00:14:51.788 sys 0m0.136s 00:14:51.788 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.788 09:08:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:51.788 09:08:30 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:51.788 09:08:30 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:51.788 09:08:30 ublk -- ublk/ublk.sh@130 -- # killprocess 71088 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@954 -- # '[' -z 71088 ']' 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@958 -- # kill -0 71088 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@959 -- # uname 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71088 00:14:51.788 killing process with pid 71088 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71088' 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@973 -- # kill 71088 00:14:51.788 09:08:30 ublk -- common/autotest_common.sh@978 -- # wait 71088 00:14:52.354 [2024-11-20 09:08:31.128179] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:52.354 [2024-11-20 09:08:31.128232] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:53.290 00:14:53.290 real 0m24.363s 00:14:53.290 user 0m35.075s 00:14:53.290 sys 0m9.136s 00:14:53.290 09:08:31 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:53.290 09:08:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:53.290 ************************************ 00:14:53.290 END TEST ublk 00:14:53.290 ************************************ 00:14:53.290 09:08:31 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:53.290 
09:08:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:53.290 09:08:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:53.290 09:08:31 -- common/autotest_common.sh@10 -- # set +x 00:14:53.290 ************************************ 00:14:53.290 START TEST ublk_recovery 00:14:53.290 ************************************ 00:14:53.290 09:08:31 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:53.290 * Looking for test storage... 00:14:53.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:53.290 09:08:32 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.290 --rc genhtml_branch_coverage=1 00:14:53.290 --rc genhtml_function_coverage=1 00:14:53.290 --rc genhtml_legend=1 00:14:53.290 --rc geninfo_all_blocks=1 00:14:53.290 --rc geninfo_unexecuted_blocks=1 00:14:53.290 00:14:53.290 ' 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.290 --rc genhtml_branch_coverage=1 00:14:53.290 --rc genhtml_function_coverage=1 00:14:53.290 --rc genhtml_legend=1 00:14:53.290 --rc geninfo_all_blocks=1 00:14:53.290 --rc geninfo_unexecuted_blocks=1 00:14:53.290 00:14:53.290 ' 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.290 --rc genhtml_branch_coverage=1 00:14:53.290 --rc genhtml_function_coverage=1 00:14:53.290 --rc genhtml_legend=1 00:14:53.290 --rc geninfo_all_blocks=1 00:14:53.290 --rc geninfo_unexecuted_blocks=1 00:14:53.290 00:14:53.290 ' 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:53.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.290 --rc genhtml_branch_coverage=1 00:14:53.290 --rc genhtml_function_coverage=1 00:14:53.290 --rc genhtml_legend=1 00:14:53.290 --rc geninfo_all_blocks=1 00:14:53.290 --rc geninfo_unexecuted_blocks=1 00:14:53.290 00:14:53.290 ' 00:14:53.290 09:08:32 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:53.290 09:08:32 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:53.290 09:08:32 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:53.290 09:08:32 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71482 00:14:53.290 09:08:32 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:53.290 09:08:32 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71482 00:14:53.290 09:08:32 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71482 ']' 00:14:53.290 09:08:32 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.291 09:08:32 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.291 09:08:32 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.291 09:08:32 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.291 09:08:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:53.551 [2024-11-20 09:08:32.222434] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:14:53.551 [2024-11-20 09:08:32.222684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71482 ] 00:14:53.551 [2024-11-20 09:08:32.380706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:53.811 [2024-11-20 09:08:32.480885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.811 [2024-11-20 09:08:32.480917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:14:54.382 09:08:33 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 [2024-11-20 09:08:33.072893] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:54.382 [2024-11-20 09:08:33.074754] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 09:08:33 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 malloc0 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 09:08:33 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 [2024-11-20 09:08:33.177039] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:54.382 [2024-11-20 09:08:33.177139] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:54.382 [2024-11-20 09:08:33.177150] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:54.382 [2024-11-20 09:08:33.177159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:54.382 [2024-11-20 09:08:33.185988] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:54.382 [2024-11-20 09:08:33.186004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:54.382 [2024-11-20 09:08:33.192902] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:54.382 [2024-11-20 09:08:33.193041] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:54.382 [2024-11-20 09:08:33.207904] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:54.382 1 00:14:54.382 09:08:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 09:08:33 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:55.327 09:08:34 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71517 00:14:55.327 09:08:34 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:55.327 09:08:34 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:55.589 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:55.589 fio-3.35 00:14:55.589 Starting 1 process 00:15:00.860 09:08:39 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71482 00:15:00.860 09:08:39 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:06.148 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71482 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:06.148 09:08:44 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71632 00:15:06.148 09:08:44 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:06.148 09:08:44 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.148 09:08:44 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71632 00:15:06.148 09:08:44 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 71632 ']' 00:15:06.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.148 09:08:44 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.148 09:08:44 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.148 09:08:44 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.148 09:08:44 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.148 09:08:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.148 [2024-11-20 09:08:44.309862] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:15:06.148 [2024-11-20 09:08:44.310000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71632 ] 00:15:06.148 [2024-11-20 09:08:44.458025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:06.148 [2024-11-20 09:08:44.541187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:06.148 [2024-11-20 09:08:44.541245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:15:06.406 09:08:45 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 [2024-11-20 09:08:45.092892] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:06.406 [2024-11-20 09:08:45.094439] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.406 09:08:45 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 malloc0 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.406 09:08:45 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:06.406 [2024-11-20 09:08:45.173101] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:06.406 [2024-11-20 09:08:45.173130] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:06.406 [2024-11-20 09:08:45.173137] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:06.406 [2024-11-20 09:08:45.180921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:06.406 [2024-11-20 09:08:45.180943] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:15:06.406 [2024-11-20 09:08:45.180950] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:06.406 [2024-11-20 09:08:45.181014] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:06.406 1 00:15:06.406 09:08:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.406 09:08:45 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71517 00:15:06.406 [2024-11-20 09:08:45.188894] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:06.406 [2024-11-20 09:08:45.195216] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:06.406 [2024-11-20 09:08:45.203046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:06.406 [2024-11-20 
09:08:45.203065] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:02.617 00:16:02.617 fio_test: (groupid=0, jobs=1): err= 0: pid=71520: Wed Nov 20 09:09:34 2024 00:16:02.617 read: IOPS=26.5k, BW=104MiB/s (109MB/s)(6223MiB/60002msec) 00:16:02.617 slat (nsec): min=878, max=705797, avg=4988.61, stdev=1577.98 00:16:02.617 clat (usec): min=698, max=5989.2k, avg=2363.53, stdev=37647.24 00:16:02.617 lat (usec): min=710, max=5989.2k, avg=2368.52, stdev=37647.24 00:16:02.617 clat percentiles (usec): 00:16:02.617 | 1.00th=[ 1778], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1942], 00:16:02.617 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2024], 00:16:02.617 | 70.00th=[ 2040], 80.00th=[ 2114], 90.00th=[ 2278], 95.00th=[ 2868], 00:16:02.617 | 99.00th=[ 4752], 99.50th=[ 5211], 99.90th=[ 6521], 99.95th=[ 7308], 00:16:02.617 | 99.99th=[13042] 00:16:02.617 bw ( KiB/s): min=26456, max=124200, per=100.00%, avg=117143.11, stdev=12818.26, samples=108 00:16:02.617 iops : min= 6614, max=31050, avg=29285.78, stdev=3204.56, samples=108 00:16:02.617 write: IOPS=26.5k, BW=104MiB/s (109MB/s)(6216MiB/60002msec); 0 zone resets 00:16:02.617 slat (nsec): min=965, max=541108, avg=5035.11, stdev=1562.49 00:16:02.617 clat (usec): min=603, max=5989.2k, avg=2449.07, stdev=38259.23 00:16:02.617 lat (usec): min=607, max=5989.2k, avg=2454.10, stdev=38259.23 00:16:02.617 clat percentiles (usec): 00:16:02.617 | 1.00th=[ 1827], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2040], 00:16:02.617 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:16:02.617 | 70.00th=[ 2114], 80.00th=[ 2180], 90.00th=[ 2376], 95.00th=[ 2802], 00:16:02.617 | 99.00th=[ 4686], 99.50th=[ 5276], 99.90th=[ 6259], 99.95th=[ 7373], 00:16:02.617 | 99.99th=[13173] 00:16:02.617 bw ( KiB/s): min=26976, max=123880, per=100.00%, avg=117023.48, stdev=12773.96, samples=108 00:16:02.617 iops : min= 6744, max=30970, avg=29255.87, stdev=3193.49, samples=108 00:16:02.617 lat (usec) : 750=0.01%, 1000=0.01% 00:16:02.617 lat (msec) : 2=29.93%, 4=67.87%, 10=2.19%, 20=0.01%, >=2000=0.01% 00:16:02.617 cpu : usr=6.08%, sys=27.41%, ctx=106054, majf=0, minf=14 00:16:02.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:02.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:02.617 issued rwts: total=1593037,1591385,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:02.617 00:16:02.617 Run status group 0 (all jobs): 00:16:02.617 READ: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=6223MiB (6525MB), run=60002-60002msec 00:16:02.617 WRITE: bw=104MiB/s (109MB/s), 104MiB/s-104MiB/s (109MB/s-109MB/s), io=6216MiB (6518MB), run=60002-60002msec 00:16:02.617 00:16:02.617 Disk stats (read/write): 00:16:02.617 ublkb1: ios=1590265/1588645, merge=0/0, ticks=3671274/3673188, in_queue=7344462, util=99.90% 00:16:02.617 09:09:34 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.617 [2024-11-20 09:09:34.476638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:02.617 [2024-11-20 09:09:34.515926] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:16:02.617 [2024-11-20 09:09:34.516109] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:02.617 [2024-11-20 09:09:34.524913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:02.617 [2024-11-20 09:09:34.528996] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:02.617 [2024-11-20 09:09:34.529015] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.617 09:09:34 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.617 [2024-11-20 09:09:34.533049] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:02.617 [2024-11-20 09:09:34.539893] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:02.617 [2024-11-20 09:09:34.539930] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.617 09:09:34 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:02.617 09:09:34 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:02.617 09:09:34 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71632 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 71632 ']' 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 71632 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71632 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.617 killing process with pid 71632 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71632' 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@973 -- # kill 71632 00:16:02.617 09:09:34 ublk_recovery -- common/autotest_common.sh@978 -- # wait 71632 00:16:02.617 [2024-11-20 09:09:35.763486] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:02.617 [2024-11-20 09:09:35.763536] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:02.617 ************************************ 00:16:02.617 END TEST ublk_recovery 00:16:02.617 ************************************ 00:16:02.617 00:16:02.617 real 1m4.603s 00:16:02.617 user 1m44.240s 00:16:02.617 sys 0m34.228s 00:16:02.617 09:09:36 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.617 09:09:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:02.617 09:09:36 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:16:02.617 09:09:36 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:02.617 09:09:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:02.617 09:09:36 -- common/autotest_common.sh@10 -- # set +x 00:16:02.617 09:09:36 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:16:02.617 09:09:36 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:02.617 09:09:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:02.617 09:09:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.617 09:09:36 -- common/autotest_common.sh@10 -- # set +x 00:16:02.617 ************************************ 00:16:02.617 START TEST ftl 00:16:02.617 ************************************ 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:02.617 * Looking for test storage... 00:16:02.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.617 09:09:36 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.617 09:09:36 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.617 09:09:36 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.617 09:09:36 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.617 09:09:36 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.617 09:09:36 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:02.617 09:09:36 ftl -- scripts/common.sh@345 -- # : 1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.617 09:09:36 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.617 09:09:36 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@353 -- # local d=1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.617 09:09:36 ftl -- scripts/common.sh@355 -- # echo 1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.617 09:09:36 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@353 -- # local d=2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.617 09:09:36 ftl -- scripts/common.sh@355 -- # echo 2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.617 09:09:36 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.617 09:09:36 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.617 09:09:36 ftl -- scripts/common.sh@368 -- # return 0 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:02.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.617 --rc genhtml_branch_coverage=1 00:16:02.617 --rc genhtml_function_coverage=1 00:16:02.617 --rc genhtml_legend=1 00:16:02.617 --rc geninfo_all_blocks=1 00:16:02.617 --rc geninfo_unexecuted_blocks=1 00:16:02.617 00:16:02.617 ' 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:02.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.617 --rc genhtml_branch_coverage=1 00:16:02.617 --rc genhtml_function_coverage=1 00:16:02.617 --rc genhtml_legend=1 00:16:02.617 --rc geninfo_all_blocks=1 00:16:02.617 --rc geninfo_unexecuted_blocks=1 00:16:02.617 00:16:02.617 ' 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:02.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.617 --rc genhtml_branch_coverage=1 00:16:02.617 --rc genhtml_function_coverage=1 00:16:02.617 --rc genhtml_legend=1 00:16:02.617 --rc geninfo_all_blocks=1 00:16:02.617 --rc geninfo_unexecuted_blocks=1 00:16:02.617 00:16:02.617 ' 00:16:02.617 09:09:36 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:02.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.617 --rc genhtml_branch_coverage=1 00:16:02.617 --rc genhtml_function_coverage=1 00:16:02.617 --rc genhtml_legend=1 00:16:02.617 --rc geninfo_all_blocks=1 00:16:02.617 --rc geninfo_unexecuted_blocks=1 00:16:02.617 00:16:02.617 ' 00:16:02.617 09:09:36 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:02.617 09:09:36 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:02.617 09:09:36 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.617 09:09:36 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.617 09:09:36 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
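Before any FTL test logic runs, ftl/common.sh pins down its directories, which is what the dirname/readlink pair in the trace here is doing. A condensed sketch of that preamble (paths copied from the log; using $0 for the script path is an assumption):

testdir=$(readlink -f "$(dirname "$0")")    # /home/vagrant/spdk_repo/spdk/test/ftl
rootdir=$(readlink -f "$testdir/../..")     # /home/vagrant/spdk_repo/spdk
rpc_py=$rootdir/scripts/rpc.py              # every RPC below goes through this helper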
00:16:02.617 09:09:36 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:02.617 09:09:36 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.617 09:09:36 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:02.617 09:09:36 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:02.617 09:09:36 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.617 09:09:36 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.617 09:09:36 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:02.617 09:09:36 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:02.617 09:09:36 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.617 09:09:36 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.617 09:09:36 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:02.617 09:09:36 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:02.617 09:09:36 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.618 09:09:36 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.618 09:09:36 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:02.618 09:09:36 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:02.618 09:09:36 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.618 09:09:36 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.618 09:09:36 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.618 09:09:36 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.618 09:09:36 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:02.618 09:09:36 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:02.618 09:09:36 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.618 09:09:36 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.618 09:09:36 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.618 09:09:36 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:02.618 09:09:36 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:02.618 09:09:36 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:02.618 09:09:36 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:02.618 09:09:36 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:02.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:02.618 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:02.618 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:02.618 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:02.618 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:02.618 09:09:37 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72443 00:16:02.618 09:09:37 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:02.618 09:09:37 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72443 00:16:02.618 09:09:37 ftl -- common/autotest_common.sh@835 -- # '[' -z 72443 ']' 00:16:02.618 09:09:37 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.618 09:09:37 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.618 09:09:37 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.618 09:09:37 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.618 09:09:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:02.618 [2024-11-20 09:09:37.462592] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:16:02.618 [2024-11-20 09:09:37.463264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72443 ] 00:16:02.618 [2024-11-20 09:09:37.620394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.618 [2024-11-20 09:09:37.739737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.618 09:09:38 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.618 09:09:38 ftl -- common/autotest_common.sh@868 -- # return 0 00:16:02.618 09:09:38 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:02.618 09:09:38 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@50 -- # break 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:02.618 09:09:39 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:02.618 09:09:40 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:02.618 09:09:40 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:02.618 09:09:40 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:02.618 09:09:40 ftl -- ftl/ftl.sh@63 -- # break 00:16:02.618 09:09:40 ftl -- ftl/ftl.sh@66 -- # killprocess 72443 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@954 -- # '[' -z 72443 ']' 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@958 -- # kill -0 72443 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@959 -- # uname 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.618 09:09:40 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72443 00:16:02.618 killing process with pid 72443 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72443' 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@973 -- # kill 72443 00:16:02.618 09:09:40 ftl -- common/autotest_common.sh@978 -- # wait 72443 00:16:02.618 09:09:41 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:02.618 09:09:41 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:02.618 09:09:41 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:02.618 09:09:41 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.618 09:09:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:02.618 ************************************ 00:16:02.618 START TEST ftl_fio_basic 00:16:02.618 ************************************ 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:02.618 * Looking for test storage... 00:16:02.618 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.618 --rc genhtml_branch_coverage=1 00:16:02.618 --rc genhtml_function_coverage=1 00:16:02.618 --rc genhtml_legend=1 00:16:02.618 --rc geninfo_all_blocks=1 00:16:02.618 --rc geninfo_unexecuted_blocks=1 00:16:02.618 00:16:02.618 ' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.618 --rc genhtml_branch_coverage=1 00:16:02.618 --rc genhtml_function_coverage=1 00:16:02.618 --rc genhtml_legend=1 00:16:02.618 --rc geninfo_all_blocks=1 00:16:02.618 --rc geninfo_unexecuted_blocks=1 00:16:02.618 00:16:02.618 ' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.618 --rc genhtml_branch_coverage=1 00:16:02.618 --rc genhtml_function_coverage=1 00:16:02.618 --rc genhtml_legend=1 00:16:02.618 --rc geninfo_all_blocks=1 00:16:02.618 --rc geninfo_unexecuted_blocks=1 00:16:02.618 00:16:02.618 ' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:02.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.618 --rc genhtml_branch_coverage=1 00:16:02.618 --rc genhtml_function_coverage=1 00:16:02.618 --rc genhtml_legend=1 00:16:02.618 --rc geninfo_all_blocks=1 00:16:02.618 --rc geninfo_unexecuted_blocks=1 00:16:02.618 00:16:02.618 ' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
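The lcov probe reruns for ftl_fio_basic, so the same cmp_versions trace appears a second time. Condensed into readable form (logic reconstructed from the xtrace above, so a sketch rather than the verbatim helper):

# cmp_versions "1.15" "<" "2": split on '.-:' and compare field by field
IFS=.-: read -ra ver1 <<< "1.15"    # ver1=(1 15), ver1_l=2
IFS=.-: read -ra ver2 <<< "2"       # ver2=(2),    ver2_l=1
# the first unequal field decides: ver1[0]=1 < ver2[0]=2,
# so lt 1.15 2 returns 0 and the lcov-era LCOV_OPTS get exported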
00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72570 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72570 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 72570 ']' 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:02.618 09:09:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.878 [2024-11-20 09:09:41.548558] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
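fio.sh is invoked with the base device, the cache device, and a suite name, and the trace above shows it resolving that name through the suite associative array before starting its own spdk_tgt on three cores. A sketch of that lookup (array contents and the @34 guard copied from the log; the positional-parameter mapping is an assumption):

declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
device=$1          # 0000:00:11.0
cache_device=$2    # 0000:00:10.0
tests=${suite[$3]} # $3 = "basic" in this run
[ -n "$tests" ] || exit 1   # the '[ -z ... ]' check traced at fio.sh@34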
00:16:02.878 [2024-11-20 09:09:41.548783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72570 ] 00:16:02.878 [2024-11-20 09:09:41.704837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.878 [2024-11-20 09:09:41.783362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.878 [2024-11-20 09:09:41.783654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.878 [2024-11-20 09:09:41.783692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:03.814 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:04.073 { 00:16:04.073 "name": "nvme0n1", 00:16:04.073 "aliases": [ 00:16:04.073 "4803cd92-a9f9-416b-b761-4327bc92dcf1" 00:16:04.073 ], 00:16:04.073 "product_name": "NVMe disk", 00:16:04.073 "block_size": 4096, 00:16:04.073 "num_blocks": 1310720, 00:16:04.073 "uuid": "4803cd92-a9f9-416b-b761-4327bc92dcf1", 00:16:04.073 "numa_id": -1, 00:16:04.073 "assigned_rate_limits": { 00:16:04.073 "rw_ios_per_sec": 0, 00:16:04.073 "rw_mbytes_per_sec": 0, 00:16:04.073 "r_mbytes_per_sec": 0, 00:16:04.073 "w_mbytes_per_sec": 0 00:16:04.073 }, 00:16:04.073 "claimed": false, 00:16:04.073 "zoned": false, 00:16:04.073 "supported_io_types": { 00:16:04.073 "read": true, 00:16:04.073 "write": true, 00:16:04.073 "unmap": true, 00:16:04.073 "flush": true, 00:16:04.073 "reset": true, 00:16:04.073 "nvme_admin": true, 00:16:04.073 "nvme_io": true, 00:16:04.073 "nvme_io_md": false, 00:16:04.073 "write_zeroes": true, 00:16:04.073 "zcopy": false, 00:16:04.073 "get_zone_info": false, 00:16:04.073 "zone_management": false, 00:16:04.073 "zone_append": false, 00:16:04.073 "compare": true, 00:16:04.073 "compare_and_write": false, 00:16:04.073 "abort": true, 00:16:04.073 
"seek_hole": false, 00:16:04.073 "seek_data": false, 00:16:04.073 "copy": true, 00:16:04.073 "nvme_iov_md": false 00:16:04.073 }, 00:16:04.073 "driver_specific": { 00:16:04.073 "nvme": [ 00:16:04.073 { 00:16:04.073 "pci_address": "0000:00:11.0", 00:16:04.073 "trid": { 00:16:04.073 "trtype": "PCIe", 00:16:04.073 "traddr": "0000:00:11.0" 00:16:04.073 }, 00:16:04.073 "ctrlr_data": { 00:16:04.073 "cntlid": 0, 00:16:04.073 "vendor_id": "0x1b36", 00:16:04.073 "model_number": "QEMU NVMe Ctrl", 00:16:04.073 "serial_number": "12341", 00:16:04.073 "firmware_revision": "8.0.0", 00:16:04.073 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:04.073 "oacs": { 00:16:04.073 "security": 0, 00:16:04.073 "format": 1, 00:16:04.073 "firmware": 0, 00:16:04.073 "ns_manage": 1 00:16:04.073 }, 00:16:04.073 "multi_ctrlr": false, 00:16:04.073 "ana_reporting": false 00:16:04.073 }, 00:16:04.073 "vs": { 00:16:04.073 "nvme_version": "1.4" 00:16:04.073 }, 00:16:04.073 "ns_data": { 00:16:04.073 "id": 1, 00:16:04.073 "can_share": false 00:16:04.073 } 00:16:04.073 } 00:16:04.073 ], 00:16:04.073 "mp_policy": "active_passive" 00:16:04.073 } 00:16:04.073 } 00:16:04.073 ]' 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:04.073 09:09:42 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:04.334 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:04.334 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5fb315ab-8cd0-4a4d-b424-be502517d903 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5fb315ab-8cd0-4a4d-b424-be502517d903 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=f22f91d9-3207-4863-963c-a25fdcd1dbb6 
00:16:04.596 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:04.596 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:04.854 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:04.854 { 00:16:04.854 "name": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:04.854 "aliases": [ 00:16:04.854 "lvs/nvme0n1p0" 00:16:04.854 ], 00:16:04.855 "product_name": "Logical Volume", 00:16:04.855 "block_size": 4096, 00:16:04.855 "num_blocks": 26476544, 00:16:04.855 "uuid": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:04.855 "assigned_rate_limits": { 00:16:04.855 "rw_ios_per_sec": 0, 00:16:04.855 "rw_mbytes_per_sec": 0, 00:16:04.855 "r_mbytes_per_sec": 0, 00:16:04.855 "w_mbytes_per_sec": 0 00:16:04.855 }, 00:16:04.855 "claimed": false, 00:16:04.855 "zoned": false, 00:16:04.855 "supported_io_types": { 00:16:04.855 "read": true, 00:16:04.855 "write": true, 00:16:04.855 "unmap": true, 00:16:04.855 "flush": false, 00:16:04.855 "reset": true, 00:16:04.855 "nvme_admin": false, 00:16:04.855 "nvme_io": false, 00:16:04.855 "nvme_io_md": false, 00:16:04.855 "write_zeroes": true, 00:16:04.855 "zcopy": false, 00:16:04.855 "get_zone_info": false, 00:16:04.855 "zone_management": false, 00:16:04.855 "zone_append": false, 00:16:04.855 "compare": false, 00:16:04.855 "compare_and_write": false, 00:16:04.855 "abort": false, 00:16:04.855 "seek_hole": true, 00:16:04.855 "seek_data": true, 00:16:04.855 "copy": false, 00:16:04.855 "nvme_iov_md": false 00:16:04.855 }, 00:16:04.855 "driver_specific": { 00:16:04.855 "lvol": { 00:16:04.855 "lvol_store_uuid": "5fb315ab-8cd0-4a4d-b424-be502517d903", 00:16:04.855 "base_bdev": "nvme0n1", 00:16:04.855 "thin_provision": true, 00:16:04.855 "num_allocated_clusters": 0, 00:16:04.855 "snapshot": false, 00:16:04.855 "clone": false, 00:16:04.855 "esnap_clone": false 00:16:04.855 } 00:16:04.855 } 00:16:04.855 } 00:16:04.855 ]' 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:04.855 09:09:43 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:05.113 09:09:44 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:05.113 09:09:44 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:05.113 09:09:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:05.113 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:05.113 09:09:44 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:05.113 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:05.113 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:05.114 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:05.372 { 00:16:05.372 "name": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:05.372 "aliases": [ 00:16:05.372 "lvs/nvme0n1p0" 00:16:05.372 ], 00:16:05.372 "product_name": "Logical Volume", 00:16:05.372 "block_size": 4096, 00:16:05.372 "num_blocks": 26476544, 00:16:05.372 "uuid": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:05.372 "assigned_rate_limits": { 00:16:05.372 "rw_ios_per_sec": 0, 00:16:05.372 "rw_mbytes_per_sec": 0, 00:16:05.372 "r_mbytes_per_sec": 0, 00:16:05.372 "w_mbytes_per_sec": 0 00:16:05.372 }, 00:16:05.372 "claimed": false, 00:16:05.372 "zoned": false, 00:16:05.372 "supported_io_types": { 00:16:05.372 "read": true, 00:16:05.372 "write": true, 00:16:05.372 "unmap": true, 00:16:05.372 "flush": false, 00:16:05.372 "reset": true, 00:16:05.372 "nvme_admin": false, 00:16:05.372 "nvme_io": false, 00:16:05.372 "nvme_io_md": false, 00:16:05.372 "write_zeroes": true, 00:16:05.372 "zcopy": false, 00:16:05.372 "get_zone_info": false, 00:16:05.372 "zone_management": false, 00:16:05.372 "zone_append": false, 00:16:05.372 "compare": false, 00:16:05.372 "compare_and_write": false, 00:16:05.372 "abort": false, 00:16:05.372 "seek_hole": true, 00:16:05.372 "seek_data": true, 00:16:05.372 "copy": false, 00:16:05.372 "nvme_iov_md": false 00:16:05.372 }, 00:16:05.372 "driver_specific": { 00:16:05.372 "lvol": { 00:16:05.372 "lvol_store_uuid": "5fb315ab-8cd0-4a4d-b424-be502517d903", 00:16:05.372 "base_bdev": "nvme0n1", 00:16:05.372 "thin_provision": true, 00:16:05.372 "num_allocated_clusters": 0, 00:16:05.372 "snapshot": false, 00:16:05.372 "clone": false, 00:16:05.372 "esnap_clone": false 00:16:05.372 } 00:16:05.372 } 00:16:05.372 } 00:16:05.372 ]' 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:05.372 09:09:44 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:05.631 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:16:05.631 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f22f91d9-3207-4863-963c-a25fdcd1dbb6 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:05.890 { 00:16:05.890 "name": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:05.890 "aliases": [ 00:16:05.890 "lvs/nvme0n1p0" 00:16:05.890 ], 00:16:05.890 "product_name": "Logical Volume", 00:16:05.890 "block_size": 4096, 00:16:05.890 "num_blocks": 26476544, 00:16:05.890 "uuid": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:05.890 "assigned_rate_limits": { 00:16:05.890 "rw_ios_per_sec": 0, 00:16:05.890 "rw_mbytes_per_sec": 0, 00:16:05.890 "r_mbytes_per_sec": 0, 00:16:05.890 "w_mbytes_per_sec": 0 00:16:05.890 }, 00:16:05.890 "claimed": false, 00:16:05.890 "zoned": false, 00:16:05.890 "supported_io_types": { 00:16:05.890 "read": true, 00:16:05.890 "write": true, 00:16:05.890 "unmap": true, 00:16:05.890 "flush": false, 00:16:05.890 "reset": true, 00:16:05.890 "nvme_admin": false, 00:16:05.890 "nvme_io": false, 00:16:05.890 "nvme_io_md": false, 00:16:05.890 "write_zeroes": true, 00:16:05.890 "zcopy": false, 00:16:05.890 "get_zone_info": false, 00:16:05.890 "zone_management": false, 00:16:05.890 "zone_append": false, 00:16:05.890 "compare": false, 00:16:05.890 "compare_and_write": false, 00:16:05.890 "abort": false, 00:16:05.890 "seek_hole": true, 00:16:05.890 "seek_data": true, 00:16:05.890 "copy": false, 00:16:05.890 "nvme_iov_md": false 00:16:05.890 }, 00:16:05.890 "driver_specific": { 00:16:05.890 "lvol": { 00:16:05.890 "lvol_store_uuid": "5fb315ab-8cd0-4a4d-b424-be502517d903", 00:16:05.890 "base_bdev": "nvme0n1", 00:16:05.890 "thin_provision": true, 00:16:05.890 "num_allocated_clusters": 0, 00:16:05.890 "snapshot": false, 00:16:05.890 "clone": false, 00:16:05.890 "esnap_clone": false 00:16:05.890 } 00:16:05.890 } 00:16:05.890 } 00:16:05.890 ]' 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:05.890 09:09:44 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f22f91d9-3207-4863-963c-a25fdcd1dbb6 -c nvc0n1p0 --l2p_dram_limit 60 00:16:06.149 [2024-11-20 09:09:44.911884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.149 [2024-11-20 09:09:44.911925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:06.149 [2024-11-20 09:09:44.911938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:06.149 
[2024-11-20 09:09:44.911944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.149 [2024-11-20 09:09:44.911995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.912004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:06.150 [2024-11-20 09:09:44.912013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:06.150 [2024-11-20 09:09:44.912018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.912049] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:06.150 [2024-11-20 09:09:44.912606] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:06.150 [2024-11-20 09:09:44.912628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.912635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:06.150 [2024-11-20 09:09:44.912643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:16:06.150 [2024-11-20 09:09:44.912650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.912686] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 80a942c4-dccb-48d2-9039-690f7e4afdd0 00:16:06.150 [2024-11-20 09:09:44.913728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.913843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:06.150 [2024-11-20 09:09:44.913856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:06.150 [2024-11-20 09:09:44.913863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.919158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.919186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:06.150 [2024-11-20 09:09:44.919194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.190 ms 00:16:06.150 [2024-11-20 09:09:44.919201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.919286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.919295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:06.150 [2024-11-20 09:09:44.919301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:06.150 [2024-11-20 09:09:44.919311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.919367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.919376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:06.150 [2024-11-20 09:09:44.919383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:06.150 [2024-11-20 09:09:44.919390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.919421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:06.150 [2024-11-20 09:09:44.922376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 
09:09:44.922400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:06.150 [2024-11-20 09:09:44.922410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:16:06.150 [2024-11-20 09:09:44.922418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.922449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.922456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:06.150 [2024-11-20 09:09:44.922464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:06.150 [2024-11-20 09:09:44.922469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.922490] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:06.150 [2024-11-20 09:09:44.922605] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:06.150 [2024-11-20 09:09:44.922617] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:06.150 [2024-11-20 09:09:44.922625] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:06.150 [2024-11-20 09:09:44.922635] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:06.150 [2024-11-20 09:09:44.922642] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:06.150 [2024-11-20 09:09:44.922649] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:06.150 [2024-11-20 09:09:44.922655] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:06.150 [2024-11-20 09:09:44.922662] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:06.150 [2024-11-20 09:09:44.922668] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:06.150 [2024-11-20 09:09:44.922676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.922683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:06.150 [2024-11-20 09:09:44.922692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:16:06.150 [2024-11-20 09:09:44.922697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.922773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.150 [2024-11-20 09:09:44.922779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:06.150 [2024-11-20 09:09:44.922786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:06.150 [2024-11-20 09:09:44.922792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.150 [2024-11-20 09:09:44.922897] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:06.150 [2024-11-20 09:09:44.922907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:06.150 [2024-11-20 09:09:44.922918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.150 [2024-11-20 09:09:44.922924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.150 [2024-11-20 09:09:44.922931] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:16:06.150 [2024-11-20 09:09:44.922937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:06.150 [2024-11-20 09:09:44.922943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:06.150 [2024-11-20 09:09:44.922949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:06.150 [2024-11-20 09:09:44.922956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:06.150 [2024-11-20 09:09:44.922961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.150 [2024-11-20 09:09:44.922967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:06.150 [2024-11-20 09:09:44.922972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:06.150 [2024-11-20 09:09:44.922979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:06.150 [2024-11-20 09:09:44.922984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:06.150 [2024-11-20 09:09:44.922992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:06.150 [2024-11-20 09:09:44.922997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.150 [2024-11-20 09:09:44.923006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:06.150 [2024-11-20 09:09:44.923012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:06.150 [2024-11-20 09:09:44.923018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.150 [2024-11-20 09:09:44.923024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:06.150 [2024-11-20 09:09:44.923030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:06.150 [2024-11-20 09:09:44.923035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.150 [2024-11-20 09:09:44.923042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:06.150 [2024-11-20 09:09:44.923047] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:06.150 [2024-11-20 09:09:44.923053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.150 [2024-11-20 09:09:44.923061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:06.151 [2024-11-20 09:09:44.923068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:06.151 [2024-11-20 09:09:44.923073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.151 [2024-11-20 09:09:44.923079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:06.151 [2024-11-20 09:09:44.923085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:06.151 [2024-11-20 09:09:44.923091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:06.151 [2024-11-20 09:09:44.923096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:06.151 [2024-11-20 09:09:44.923104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:06.151 [2024-11-20 09:09:44.923109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.151 [2024-11-20 09:09:44.923115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:06.151 [2024-11-20 09:09:44.923130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:06.151 [2024-11-20 09:09:44.923136] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:06.151 [2024-11-20 09:09:44.923141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:06.151 [2024-11-20 09:09:44.923148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:06.151 [2024-11-20 09:09:44.923152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.151 [2024-11-20 09:09:44.923159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:06.151 [2024-11-20 09:09:44.923164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:06.151 [2024-11-20 09:09:44.923171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.151 [2024-11-20 09:09:44.923176] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:06.151 [2024-11-20 09:09:44.923183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:06.151 [2024-11-20 09:09:44.923189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:06.151 [2024-11-20 09:09:44.923195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:06.151 [2024-11-20 09:09:44.923201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:06.151 [2024-11-20 09:09:44.923209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:06.151 [2024-11-20 09:09:44.923214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:06.151 [2024-11-20 09:09:44.923221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:06.151 [2024-11-20 09:09:44.923226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:06.151 [2024-11-20 09:09:44.923232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:06.151 [2024-11-20 09:09:44.923240] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:06.151 [2024-11-20 09:09:44.923248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:06.151 [2024-11-20 09:09:44.923262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:06.151 [2024-11-20 09:09:44.923270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:06.151 [2024-11-20 09:09:44.923276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:06.151 [2024-11-20 09:09:44.923282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:06.151 [2024-11-20 09:09:44.923289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:06.151 [2024-11-20 09:09:44.923295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:06.151 [2024-11-20 09:09:44.923302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:16:06.151 [2024-11-20 09:09:44.923307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:06.151 [2024-11-20 09:09:44.923315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:06.151 [2024-11-20 09:09:44.923346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:06.151 [2024-11-20 09:09:44.923354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923361] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:06.151 [2024-11-20 09:09:44.923369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:06.151 [2024-11-20 09:09:44.923375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:06.151 [2024-11-20 09:09:44.923382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:06.151 [2024-11-20 09:09:44.923388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.151 [2024-11-20 09:09:44.923395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:06.151 [2024-11-20 09:09:44.923401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:16:06.151 [2024-11-20 09:09:44.923408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.151 [2024-11-20 09:09:44.923482] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
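A minimal sketch of how an FTL instance like ftl0 above can be created and inspected by hand with SPDK's rpc.py; the base and cache bdev names are the ones this log reports, and the bdev_ftl_create flags are an assumption to be checked against the rpc.py shipped in the tree under test:

  # Create the FTL bdev on top of the base bdev and the NV-cache bdev (assumed flags),
  # then dump its descriptor once startup (including the layout dump above) completes.
  ./scripts/rpc.py bdev_ftl_create -b ftl0 -d f22f91d9-3207-4863-963c-a25fdcd1dbb6 -c nvc0n1p0
  ./scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000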
00:16:06.151 [2024-11-20 09:09:44.923499] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:09.438 [2024-11-20 09:09:47.659884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.659939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:09.438 [2024-11-20 09:09:47.659955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2736.388 ms 00:16:09.438 [2024-11-20 09:09:47.659966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.685697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.685748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:09.438 [2024-11-20 09:09:47.685760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.521 ms 00:16:09.438 [2024-11-20 09:09:47.685770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.685909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.685922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:09.438 [2024-11-20 09:09:47.685931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:16:09.438 [2024-11-20 09:09:47.685942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.724707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.724750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:09.438 [2024-11-20 09:09:47.724766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.722 ms 00:16:09.438 [2024-11-20 09:09:47.724776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.724820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.724831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:09.438 [2024-11-20 09:09:47.724839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:09.438 [2024-11-20 09:09:47.724848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.725241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.725269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:09.438 [2024-11-20 09:09:47.725279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:09.438 [2024-11-20 09:09:47.725291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.725412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.725423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:09.438 [2024-11-20 09:09:47.725432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:16:09.438 [2024-11-20 09:09:47.725442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.743079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.743111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:09.438 [2024-11-20 
09:09:47.743121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.610 ms 00:16:09.438 [2024-11-20 09:09:47.743130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.754493] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:09.438 [2024-11-20 09:09:47.769127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.769170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:09.438 [2024-11-20 09:09:47.769184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.884 ms 00:16:09.438 [2024-11-20 09:09:47.769193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.823567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.823718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:09.438 [2024-11-20 09:09:47.823743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.340 ms 00:16:09.438 [2024-11-20 09:09:47.823752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.823944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.823956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:09.438 [2024-11-20 09:09:47.823968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:16:09.438 [2024-11-20 09:09:47.823976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.846475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.846607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:09.438 [2024-11-20 09:09:47.846626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.443 ms 00:16:09.438 [2024-11-20 09:09:47.846634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.869136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.869256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:09.438 [2024-11-20 09:09:47.869275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.458 ms 00:16:09.438 [2024-11-20 09:09:47.869282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.869864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.869897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:09.438 [2024-11-20 09:09:47.869907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:16:09.438 [2024-11-20 09:09:47.869915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.934325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.934356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:09.438 [2024-11-20 09:09:47.934371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.371 ms 00:16:09.438 [2024-11-20 09:09:47.934381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 
09:09:47.957839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.957882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:09.438 [2024-11-20 09:09:47.957895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.373 ms 00:16:09.438 [2024-11-20 09:09:47.957902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:47.980193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:47.980307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:09.438 [2024-11-20 09:09:47.980326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.246 ms 00:16:09.438 [2024-11-20 09:09:47.980334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:48.003335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:48.003470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:09.438 [2024-11-20 09:09:48.003488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.966 ms 00:16:09.438 [2024-11-20 09:09:48.003495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:48.003535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:48.003545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:09.438 [2024-11-20 09:09:48.003556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:09.438 [2024-11-20 09:09:48.003565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:48.003652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.438 [2024-11-20 09:09:48.003661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:09.438 [2024-11-20 09:09:48.003671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:09.438 [2024-11-20 09:09:48.003679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.438 [2024-11-20 09:09:48.004918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3092.602 ms, result 0 00:16:09.438 { 00:16:09.438 "name": "ftl0", 00:16:09.438 "uuid": "80a942c4-dccb-48d2-9039-690f7e4afdd0" 00:16:09.438 } 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:09.438 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:09.696 [ 00:16:09.696 { 00:16:09.696 "name": "ftl0", 00:16:09.696 "aliases": [ 00:16:09.696 "80a942c4-dccb-48d2-9039-690f7e4afdd0" 00:16:09.696 ], 00:16:09.696 "product_name": "FTL 
disk", 00:16:09.696 "block_size": 4096, 00:16:09.696 "num_blocks": 20971520, 00:16:09.696 "uuid": "80a942c4-dccb-48d2-9039-690f7e4afdd0", 00:16:09.696 "assigned_rate_limits": { 00:16:09.696 "rw_ios_per_sec": 0, 00:16:09.696 "rw_mbytes_per_sec": 0, 00:16:09.696 "r_mbytes_per_sec": 0, 00:16:09.696 "w_mbytes_per_sec": 0 00:16:09.696 }, 00:16:09.696 "claimed": false, 00:16:09.696 "zoned": false, 00:16:09.696 "supported_io_types": { 00:16:09.696 "read": true, 00:16:09.696 "write": true, 00:16:09.696 "unmap": true, 00:16:09.696 "flush": true, 00:16:09.696 "reset": false, 00:16:09.696 "nvme_admin": false, 00:16:09.696 "nvme_io": false, 00:16:09.696 "nvme_io_md": false, 00:16:09.696 "write_zeroes": true, 00:16:09.696 "zcopy": false, 00:16:09.696 "get_zone_info": false, 00:16:09.696 "zone_management": false, 00:16:09.696 "zone_append": false, 00:16:09.696 "compare": false, 00:16:09.696 "compare_and_write": false, 00:16:09.696 "abort": false, 00:16:09.696 "seek_hole": false, 00:16:09.696 "seek_data": false, 00:16:09.696 "copy": false, 00:16:09.696 "nvme_iov_md": false 00:16:09.696 }, 00:16:09.696 "driver_specific": { 00:16:09.696 "ftl": { 00:16:09.696 "base_bdev": "f22f91d9-3207-4863-963c-a25fdcd1dbb6", 00:16:09.696 "cache": "nvc0n1p0" 00:16:09.696 } 00:16:09.696 } 00:16:09.696 } 00:16:09.696 ] 00:16:09.696 09:09:48 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:16:09.696 09:09:48 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:09.696 09:09:48 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:09.954 09:09:48 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:09.954 09:09:48 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:09.954 [2024-11-20 09:09:48.845658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.954 [2024-11-20 09:09:48.845698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:09.954 [2024-11-20 09:09:48.845708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:09.954 [2024-11-20 09:09:48.845716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.954 [2024-11-20 09:09:48.845754] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:09.954 [2024-11-20 09:09:48.847878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.954 [2024-11-20 09:09:48.847902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:09.954 [2024-11-20 09:09:48.847912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.101 ms 00:16:09.954 [2024-11-20 09:09:48.847918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.954 [2024-11-20 09:09:48.848359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.954 [2024-11-20 09:09:48.848376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:09.954 [2024-11-20 09:09:48.848385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:16:09.954 [2024-11-20 09:09:48.848391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.954 [2024-11-20 09:09:48.850830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.954 [2024-11-20 09:09:48.850848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:09.955 
[2024-11-20 09:09:48.850857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.414 ms 00:16:09.955 [2024-11-20 09:09:48.850864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:09.955 [2024-11-20 09:09:48.855594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:09.955 [2024-11-20 09:09:48.855614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:09.955 [2024-11-20 09:09:48.855625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.696 ms 00:16:09.955 [2024-11-20 09:09:48.855631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.874220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.874246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:10.214 [2024-11-20 09:09:48.874256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.509 ms 00:16:10.214 [2024-11-20 09:09:48.874262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.886719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.886745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:10.214 [2024-11-20 09:09:48.886756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.402 ms 00:16:10.214 [2024-11-20 09:09:48.886764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.886942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.886952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:10.214 [2024-11-20 09:09:48.886960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:10.214 [2024-11-20 09:09:48.886966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.904415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.904438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:10.214 [2024-11-20 09:09:48.904447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.426 ms 00:16:10.214 [2024-11-20 09:09:48.904453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.921393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.921415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:10.214 [2024-11-20 09:09:48.921425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.901 ms 00:16:10.214 [2024-11-20 09:09:48.921430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.938545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.938568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:10.214 [2024-11-20 09:09:48.938577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.078 ms 00:16:10.214 [2024-11-20 09:09:48.938582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.955366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.214 [2024-11-20 09:09:48.955475] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:10.214 [2024-11-20 09:09:48.955491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.703 ms 00:16:10.214 [2024-11-20 09:09:48.955496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.214 [2024-11-20 09:09:48.955531] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:10.215 [2024-11-20 09:09:48.955541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 
[2024-11-20 09:09:48.955686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:10.215 [2024-11-20 09:09:48.955850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.955996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:10.215 [2024-11-20 09:09:48.956054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:10.216 [2024-11-20 09:09:48.956226] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:10.216 [2024-11-20 09:09:48.956233] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 80a942c4-dccb-48d2-9039-690f7e4afdd0 00:16:10.216 [2024-11-20 09:09:48.956239] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:10.216 [2024-11-20 09:09:48.956246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:10.216 [2024-11-20 09:09:48.956252] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:10.216 [2024-11-20 09:09:48.956261] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:10.216 [2024-11-20 09:09:48.956266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:10.216 [2024-11-20 09:09:48.956273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:10.216 [2024-11-20 09:09:48.956278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:10.216 [2024-11-20 09:09:48.956285] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:10.216 [2024-11-20 09:09:48.956289] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:10.216 [2024-11-20 09:09:48.956296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.216 [2024-11-20 09:09:48.956302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:10.216 [2024-11-20 09:09:48.956310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.767 ms 00:16:10.216 [2024-11-20 09:09:48.956315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:48.966199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.216 [2024-11-20 09:09:48.966285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:10.216 [2024-11-20 09:09:48.966367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.842 ms 00:16:10.216 [2024-11-20 09:09:48.966384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:48.966677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:10.216 [2024-11-20 09:09:48.966747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:10.216 [2024-11-20 09:09:48.966818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:16:10.216 [2024-11-20 09:09:48.966836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.001740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.001834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:10.216 [2024-11-20 09:09:49.001891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.001910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
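In the statistics block above, WAF is the write amplification factor, which this dump appears to compute as total media writes over user writes:

  WAF = total writes / user writes = 960 / 0 -> inf

so a freshly created device that has seen only internal metadata writes and no user I/O reports inf; once user writes occur the ratio settles to a finite value >= 1.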
00:16:10.216 [2024-11-20 09:09:49.001970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.002021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:10.216 [2024-11-20 09:09:49.002041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.002055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.002152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.002205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:10.216 [2024-11-20 09:09:49.002256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.002273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.002312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.002328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:10.216 [2024-11-20 09:09:49.002372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.002388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.065116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.065241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:10.216 [2024-11-20 09:09:49.065281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.065299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.113299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.113422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:10.216 [2024-11-20 09:09:49.113462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.113480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.113572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.113591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:10.216 [2024-11-20 09:09:49.113608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.113625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.113687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.113705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:10.216 [2024-11-20 09:09:49.113798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 09:09:49.113816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.113924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.216 [2024-11-20 09:09:49.113944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:10.216 [2024-11-20 09:09:49.113960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.216 [2024-11-20 
09:09:49.113974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.216 [2024-11-20 09:09:49.114077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.217 [2024-11-20 09:09:49.114098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:10.217 [2024-11-20 09:09:49.114114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.217 [2024-11-20 09:09:49.114129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.217 [2024-11-20 09:09:49.114177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.217 [2024-11-20 09:09:49.114286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:10.217 [2024-11-20 09:09:49.114302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.217 [2024-11-20 09:09:49.114316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.217 [2024-11-20 09:09:49.114376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:10.217 [2024-11-20 09:09:49.114422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:10.217 [2024-11-20 09:09:49.114442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:10.217 [2024-11-20 09:09:49.114457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:10.217 [2024-11-20 09:09:49.114667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.982 ms, result 0 00:16:10.217 true 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72570 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 72570 ']' 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 72570 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72570 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72570' 00:16:10.475 killing process with pid 72570 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 72570 00:16:10.475 09:09:49 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 72570 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:15.747 09:09:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:16.006 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:16.006 fio-3.35 00:16:16.006 Starting 1 thread 00:16:22.589 00:16:22.589 test: (groupid=0, jobs=1): err= 0: pid=72756: Wed Nov 20 09:10:00 2024 00:16:22.589 read: IOPS=768, BW=51.0MiB/s (53.5MB/s)(255MiB/4990msec) 00:16:22.589 slat (nsec): min=3814, max=47905, avg=6214.83, stdev=3064.17 00:16:22.589 clat (usec): min=300, max=4993, avg=588.13, stdev=196.37 00:16:22.589 lat (usec): min=305, max=4998, avg=594.34, stdev=197.32 00:16:22.589 clat percentiles (usec): 00:16:22.589 | 1.00th=[ 334], 5.00th=[ 383], 10.00th=[ 404], 20.00th=[ 445], 00:16:22.589 | 30.00th=[ 482], 40.00th=[ 519], 50.00th=[ 545], 60.00th=[ 553], 00:16:22.589 | 70.00th=[ 603], 80.00th=[ 766], 90.00th=[ 898], 95.00th=[ 938], 00:16:22.589 | 99.00th=[ 1106], 99.50th=[ 1172], 99.90th=[ 1401], 99.95th=[ 1516], 00:16:22.589 | 99.99th=[ 5014] 00:16:22.589 write: IOPS=773, BW=51.4MiB/s (53.9MB/s)(256MiB/4985msec); 0 zone resets 00:16:22.589 slat (usec): min=14, max=138, avg=22.02, stdev= 7.25 00:16:22.589 clat (usec): min=342, max=1959, avg=668.70, stdev=198.08 00:16:22.589 lat (usec): min=357, max=1982, avg=690.72, stdev=200.43 00:16:22.589 clat percentiles (usec): 00:16:22.589 | 1.00th=[ 359], 5.00th=[ 445], 10.00th=[ 494], 20.00th=[ 510], 00:16:22.589 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:16:22.589 | 70.00th=[ 693], 80.00th=[ 865], 90.00th=[ 971], 95.00th=[ 1020], 00:16:22.589 | 99.00th=[ 1254], 99.50th=[ 1385], 99.90th=[ 1663], 99.95th=[ 1713], 00:16:22.589 | 99.99th=[ 1958] 00:16:22.589 bw ( KiB/s): min=39440, max=63240, per=98.42%, avg=51770.67, stdev=8307.14, samples=9 00:16:22.589 iops : min= 580, max= 930, avg=761.33, stdev=122.16, samples=9 00:16:22.589 lat (usec) : 500=25.89%, 750=51.02%, 1000=18.09% 
00:16:22.589 lat (msec) : 2=4.98%, 10=0.01% 00:16:22.589 cpu : usr=99.14%, sys=0.06%, ctx=10, majf=0, minf=1169 00:16:22.589 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:22.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.589 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.589 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:22.589 00:16:22.589 Run status group 0 (all jobs): 00:16:22.589 READ: bw=51.0MiB/s (53.5MB/s), 51.0MiB/s-51.0MiB/s (53.5MB/s-53.5MB/s), io=255MiB (267MB), run=4990-4990msec 00:16:22.589 WRITE: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=256MiB (269MB), run=4985-4985msec 00:16:23.534 ----------------------------------------------------- 00:16:23.534 Suppressions used: 00:16:23.534 count bytes template 00:16:23.534 1 5 /usr/src/fio/parse.c 00:16:23.534 1 8 libtcmalloc_minimal.so 00:16:23.534 1 904 libcrypto.so 00:16:23.534 ----------------------------------------------------- 00:16:23.534 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:23.534 09:10:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:23.534 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:23.534 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:23.534 fio-3.35 00:16:23.534 Starting 2 threads 00:16:50.098 00:16:50.098 first_half: (groupid=0, jobs=1): err= 0: pid=72870: Wed Nov 20 09:10:27 2024 00:16:50.098 read: IOPS=2818, BW=11.0MiB/s (11.5MB/s)(255MiB/23173msec) 00:16:50.098 slat (nsec): min=3077, max=44325, avg=5000.91, stdev=1160.65 00:16:50.098 clat (usec): min=601, max=397096, avg=36427.75, stdev=20278.07 00:16:50.098 lat (usec): min=604, max=397101, avg=36432.75, stdev=20278.17 00:16:50.098 clat percentiles (msec): 00:16:50.098 | 1.00th=[ 18], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 30], 00:16:50.098 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:16:50.098 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 56], 00:16:50.098 | 99.00th=[ 142], 99.50th=[ 155], 99.90th=[ 271], 99.95th=[ 330], 00:16:50.098 | 99.99th=[ 388] 00:16:50.098 write: IOPS=3485, BW=13.6MiB/s (14.3MB/s)(256MiB/18805msec); 0 zone resets 00:16:50.098 slat (usec): min=3, max=2470, avg= 6.46, stdev=16.19 00:16:50.098 clat (usec): min=364, max=72245, avg=8939.34, stdev=13358.22 00:16:50.098 lat (usec): min=372, max=72252, avg=8945.81, stdev=13358.38 00:16:50.098 clat percentiles (usec): 00:16:50.098 | 1.00th=[ 676], 5.00th=[ 807], 10.00th=[ 979], 20.00th=[ 1385], 00:16:50.098 | 30.00th=[ 2638], 40.00th=[ 3458], 50.00th=[ 4490], 60.00th=[ 5473], 00:16:50.098 | 70.00th=[ 6915], 80.00th=[10814], 90.00th=[21890], 95.00th=[32637], 00:16:50.098 | 99.00th=[66323], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:16:50.098 | 99.99th=[71828] 00:16:50.098 bw ( KiB/s): min= 504, max=52464, per=96.08%, avg=23830.18, stdev=14552.19, samples=22 00:16:50.098 iops : min= 126, max=13116, avg=5957.55, stdev=3638.05, samples=22 00:16:50.098 lat (usec) : 500=0.03%, 750=1.53%, 1000=3.73% 00:16:50.098 lat (msec) : 2=7.05%, 4=10.50%, 10=16.16%, 20=5.59%, 50=50.34% 00:16:50.098 lat (msec) : 100=3.92%, 250=1.09%, 500=0.06% 00:16:50.098 cpu : usr=99.14%, sys=0.19%, ctx=224, majf=0, minf=5565 00:16:50.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:50.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.098 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.098 issued rwts: total=65307,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.098 second_half: (groupid=0, jobs=1): err= 0: pid=72871: Wed Nov 20 09:10:27 2024 00:16:50.098 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(255MiB/23452msec) 00:16:50.098 slat (nsec): min=2963, max=61270, avg=4370.72, stdev=1230.33 00:16:50.098 clat (usec): min=623, max=403063, avg=36399.82, stdev=23850.81 00:16:50.098 lat (usec): min=628, max=403068, avg=36404.19, stdev=23850.94 00:16:50.098 clat percentiles (msec): 00:16:50.098 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:16:50.098 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 34], 00:16:50.098 | 70.00th=[ 36], 80.00th=[ 38], 90.00th=[ 
43], 95.00th=[ 57], 00:16:50.098 | 99.00th=[ 146], 99.50th=[ 182], 99.90th=[ 334], 99.95th=[ 355], 00:16:50.098 | 99.99th=[ 401] 00:16:50.098 write: IOPS=3100, BW=12.1MiB/s (12.7MB/s)(256MiB/21138msec); 0 zone resets 00:16:50.098 slat (usec): min=3, max=1643, avg= 6.01, stdev=10.37 00:16:50.098 clat (usec): min=361, max=72400, avg=9512.88, stdev=14088.05 00:16:50.098 lat (usec): min=367, max=72406, avg=9518.89, stdev=14088.43 00:16:50.098 clat percentiles (usec): 00:16:50.098 | 1.00th=[ 668], 5.00th=[ 775], 10.00th=[ 865], 20.00th=[ 1156], 00:16:50.098 | 30.00th=[ 2606], 40.00th=[ 3687], 50.00th=[ 4621], 60.00th=[ 5407], 00:16:50.098 | 70.00th=[ 6259], 80.00th=[11994], 90.00th=[25035], 95.00th=[35914], 00:16:50.098 | 99.00th=[67634], 99.50th=[69731], 99.90th=[70779], 99.95th=[71828], 00:16:50.098 | 99.99th=[71828] 00:16:50.098 bw ( KiB/s): min= 2304, max=51720, per=96.08%, avg=23831.27, stdev=14024.72, samples=22 00:16:50.098 iops : min= 576, max=12930, avg=5957.82, stdev=3506.18, samples=22 00:16:50.098 lat (usec) : 500=0.02%, 750=1.92%, 1000=5.67% 00:16:50.098 lat (msec) : 2=6.10%, 4=8.04%, 10=17.24%, 20=5.09%, 50=50.75% 00:16:50.098 lat (msec) : 100=3.90%, 250=1.14%, 500=0.12% 00:16:50.098 cpu : usr=99.32%, sys=0.14%, ctx=38, majf=0, minf=5538 00:16:50.098 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:50.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.098 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:50.098 issued rwts: total=65310,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:50.098 00:16:50.098 Run status group 0 (all jobs): 00:16:50.098 READ: bw=21.8MiB/s (22.8MB/s), 10.9MiB/s-11.0MiB/s (11.4MB/s-11.5MB/s), io=510MiB (535MB), run=23173-23452msec 00:16:50.098 WRITE: bw=24.2MiB/s (25.4MB/s), 12.1MiB/s-13.6MiB/s (12.7MB/s-14.3MB/s), io=512MiB (537MB), run=18805-21138msec 00:16:50.360 ----------------------------------------------------- 00:16:50.360 Suppressions used: 00:16:50.360 count bytes template 00:16:50.360 2 10 /usr/src/fio/parse.c 00:16:50.360 3 288 /usr/src/fio/iolog.c 00:16:50.360 1 8 libtcmalloc_minimal.so 00:16:50.360 1 904 libcrypto.so 00:16:50.360 ----------------------------------------------------- 00:16:50.360 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:50.360 09:10:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:50.621 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:50.621 fio-3.35 00:16:50.621 Starting 1 thread 00:17:05.568 00:17:05.568 test: (groupid=0, jobs=1): err= 0: pid=73182: Wed Nov 20 09:10:43 2024 00:17:05.568 read: IOPS=8094, BW=31.6MiB/s (33.2MB/s)(255MiB/8055msec) 00:17:05.568 slat (nsec): min=2978, max=19891, avg=4595.46, stdev=987.28 00:17:05.568 clat (usec): min=525, max=28603, avg=15805.01, stdev=1828.34 00:17:05.568 lat (usec): min=529, max=28606, avg=15809.61, stdev=1828.39 00:17:05.568 clat percentiles (usec): 00:17:05.568 | 1.00th=[13698], 5.00th=[13960], 10.00th=[14091], 20.00th=[14484], 00:17:05.568 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15926], 00:17:05.568 | 70.00th=[16188], 80.00th=[16319], 90.00th=[16581], 95.00th=[17957], 00:17:05.568 | 99.00th=[24773], 99.50th=[25560], 99.90th=[27657], 99.95th=[27919], 00:17:05.568 | 99.99th=[28443] 00:17:05.568 write: IOPS=12.2k, BW=47.7MiB/s (50.0MB/s)(256MiB/5371msec); 0 zone resets 00:17:05.568 slat (usec): min=4, max=163, avg= 7.66, stdev= 3.49 00:17:05.568 clat (usec): min=492, max=52081, avg=10444.94, stdev=10820.75 00:17:05.568 lat (usec): min=497, max=52087, avg=10452.61, stdev=10821.08 00:17:05.568 clat percentiles (usec): 00:17:05.568 | 1.00th=[ 619], 5.00th=[ 685], 10.00th=[ 734], 20.00th=[ 898], 00:17:05.568 | 30.00th=[ 1172], 40.00th=[ 1532], 50.00th=[ 6783], 60.00th=[11469], 00:17:05.568 | 70.00th=[14615], 80.00th=[17433], 90.00th=[28705], 95.00th=[33817], 00:17:05.568 | 99.00th=[36963], 99.50th=[39584], 99.90th=[45876], 99.95th=[46400], 00:17:05.568 | 99.99th=[49546] 00:17:05.568 bw ( KiB/s): min=30912, max=63712, per=97.65%, avg=47662.55, stdev=12135.45, samples=11 00:17:05.568 iops : min= 7728, max=15928, avg=11915.64, stdev=3033.86, samples=11 00:17:05.568 lat (usec) : 500=0.01%, 750=5.74%, 1000=5.74% 00:17:05.568 lat (msec) : 2=9.04%, 4=0.64%, 10=6.81%, 20=62.09%, 50=9.94% 00:17:05.568 lat (msec) : 100=0.01% 00:17:05.568 cpu : 
usr=98.96%, sys=0.27%, ctx=27, majf=0, minf=5565 00:17:05.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:05.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.568 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.568 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.568 00:17:05.569 Run status group 0 (all jobs): 00:17:05.569 READ: bw=31.6MiB/s (33.2MB/s), 31.6MiB/s-31.6MiB/s (33.2MB/s-33.2MB/s), io=255MiB (267MB), run=8055-8055msec 00:17:05.569 WRITE: bw=47.7MiB/s (50.0MB/s), 47.7MiB/s-47.7MiB/s (50.0MB/s-50.0MB/s), io=256MiB (268MB), run=5371-5371msec 00:17:06.954 ----------------------------------------------------- 00:17:06.954 Suppressions used: 00:17:06.954 count bytes template 00:17:06.954 1 5 /usr/src/fio/parse.c 00:17:06.954 2 192 /usr/src/fio/iolog.c 00:17:06.954 1 8 libtcmalloc_minimal.so 00:17:06.954 1 904 libcrypto.so 00:17:06.954 ----------------------------------------------------- 00:17:06.954 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:06.954 Remove shared memory files 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57111 /dev/shm/spdk_tgt_trace.pid71482 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:06.954 ************************************ 00:17:06.954 END TEST ftl_fio_basic 00:17:06.954 ************************************ 00:17:06.954 00:17:06.954 real 1m4.296s 00:17:06.954 user 2m19.890s 00:17:06.954 sys 0m2.816s 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.954 09:10:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 09:10:45 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:06.954 09:10:45 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:06.954 09:10:45 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.954 09:10:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 ************************************ 00:17:06.954 START TEST ftl_bdevperf 00:17:06.954 ************************************ 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:06.954 * Looking for test storage... 
00:17:06.954 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.954 --rc genhtml_branch_coverage=1 00:17:06.954 --rc genhtml_function_coverage=1 00:17:06.954 --rc genhtml_legend=1 00:17:06.954 --rc geninfo_all_blocks=1 00:17:06.954 --rc geninfo_unexecuted_blocks=1 00:17:06.954 00:17:06.954 ' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.954 --rc genhtml_branch_coverage=1 00:17:06.954 
--rc genhtml_function_coverage=1 00:17:06.954 --rc genhtml_legend=1 00:17:06.954 --rc geninfo_all_blocks=1 00:17:06.954 --rc geninfo_unexecuted_blocks=1 00:17:06.954 00:17:06.954 ' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.954 --rc genhtml_branch_coverage=1 00:17:06.954 --rc genhtml_function_coverage=1 00:17:06.954 --rc genhtml_legend=1 00:17:06.954 --rc geninfo_all_blocks=1 00:17:06.954 --rc geninfo_unexecuted_blocks=1 00:17:06.954 00:17:06.954 ' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:06.954 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.954 --rc genhtml_branch_coverage=1 00:17:06.954 --rc genhtml_function_coverage=1 00:17:06.954 --rc genhtml_legend=1 00:17:06.954 --rc geninfo_all_blocks=1 00:17:06.954 --rc geninfo_unexecuted_blocks=1 00:17:06.954 00:17:06.954 ' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.954 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73424 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73424 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 73424 ']' 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:06.955 09:10:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:07.215 [2024-11-20 09:10:45.912180] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
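[Editor's note] The trace above starts bdevperf in RPC-wait mode and then blocks on waitforlisten until the target's UNIX socket is up. A minimal sketch of that launch pattern, using only the binary, flags, and paths visible in the log (-z starts the app with no bdev config and waits to be configured over RPC; -T ftl0 restricts the run to that bdev once it exists). The polling loop is a simplified stand-in for the real waitforlisten helper, and rpc_get_methods is used here only as a liveness probe — both are this note's assumptions, not lines from the log:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $bdevperf -z -T ftl0 &          # target idles until configured over RPC
    bdevperf_pid=$!
    # crude stand-in for waitforlisten: poll /var/tmp/spdk.sock until it answers
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done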
00:17:07.216 [2024-11-20 09:10:45.912596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73424 ] 00:17:07.216 [2024-11-20 09:10:46.076597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.477 [2024-11-20 09:10:46.200951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:08.050 09:10:46 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:08.311 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:08.574 { 00:17:08.574 "name": "nvme0n1", 00:17:08.574 "aliases": [ 00:17:08.574 "aab87e69-ab40-4ee8-9bce-14788e486605" 00:17:08.574 ], 00:17:08.574 "product_name": "NVMe disk", 00:17:08.574 "block_size": 4096, 00:17:08.574 "num_blocks": 1310720, 00:17:08.574 "uuid": "aab87e69-ab40-4ee8-9bce-14788e486605", 00:17:08.574 "numa_id": -1, 00:17:08.574 "assigned_rate_limits": { 00:17:08.574 "rw_ios_per_sec": 0, 00:17:08.574 "rw_mbytes_per_sec": 0, 00:17:08.574 "r_mbytes_per_sec": 0, 00:17:08.574 "w_mbytes_per_sec": 0 00:17:08.574 }, 00:17:08.574 "claimed": true, 00:17:08.574 "claim_type": "read_many_write_one", 00:17:08.574 "zoned": false, 00:17:08.574 "supported_io_types": { 00:17:08.574 "read": true, 00:17:08.574 "write": true, 00:17:08.574 "unmap": true, 00:17:08.574 "flush": true, 00:17:08.574 "reset": true, 00:17:08.574 "nvme_admin": true, 00:17:08.574 "nvme_io": true, 00:17:08.574 "nvme_io_md": false, 00:17:08.574 "write_zeroes": true, 00:17:08.574 "zcopy": false, 00:17:08.574 "get_zone_info": false, 00:17:08.574 "zone_management": false, 00:17:08.574 "zone_append": false, 00:17:08.574 "compare": true, 00:17:08.574 "compare_and_write": false, 00:17:08.574 "abort": true, 00:17:08.574 "seek_hole": false, 00:17:08.574 "seek_data": false, 00:17:08.574 "copy": true, 00:17:08.574 "nvme_iov_md": false 00:17:08.574 }, 00:17:08.574 "driver_specific": { 00:17:08.574 
"nvme": [ 00:17:08.574 { 00:17:08.574 "pci_address": "0000:00:11.0", 00:17:08.574 "trid": { 00:17:08.574 "trtype": "PCIe", 00:17:08.574 "traddr": "0000:00:11.0" 00:17:08.574 }, 00:17:08.574 "ctrlr_data": { 00:17:08.574 "cntlid": 0, 00:17:08.574 "vendor_id": "0x1b36", 00:17:08.574 "model_number": "QEMU NVMe Ctrl", 00:17:08.574 "serial_number": "12341", 00:17:08.574 "firmware_revision": "8.0.0", 00:17:08.574 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:08.574 "oacs": { 00:17:08.574 "security": 0, 00:17:08.574 "format": 1, 00:17:08.574 "firmware": 0, 00:17:08.574 "ns_manage": 1 00:17:08.574 }, 00:17:08.574 "multi_ctrlr": false, 00:17:08.574 "ana_reporting": false 00:17:08.574 }, 00:17:08.574 "vs": { 00:17:08.574 "nvme_version": "1.4" 00:17:08.574 }, 00:17:08.574 "ns_data": { 00:17:08.574 "id": 1, 00:17:08.574 "can_share": false 00:17:08.574 } 00:17:08.574 } 00:17:08.574 ], 00:17:08.574 "mp_policy": "active_passive" 00:17:08.574 } 00:17:08.574 } 00:17:08.574 ]' 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:08.574 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:08.835 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5fb315ab-8cd0-4a4d-b424-be502517d903 00:17:08.835 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:08.835 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fb315ab-8cd0-4a4d-b424-be502517d903 00:17:09.098 09:10:47 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:09.098 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39 00:17:09.098 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.360 09:10:48 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:09.360 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:09.621 { 00:17:09.621 "name": "77e24ea8-a34c-4810-88b4-4c5ecadfaf16", 00:17:09.621 "aliases": [ 00:17:09.621 "lvs/nvme0n1p0" 00:17:09.621 ], 00:17:09.621 "product_name": "Logical Volume", 00:17:09.621 "block_size": 4096, 00:17:09.621 "num_blocks": 26476544, 00:17:09.621 "uuid": "77e24ea8-a34c-4810-88b4-4c5ecadfaf16", 00:17:09.621 "assigned_rate_limits": { 00:17:09.621 "rw_ios_per_sec": 0, 00:17:09.621 "rw_mbytes_per_sec": 0, 00:17:09.621 "r_mbytes_per_sec": 0, 00:17:09.621 "w_mbytes_per_sec": 0 00:17:09.621 }, 00:17:09.621 "claimed": false, 00:17:09.621 "zoned": false, 00:17:09.621 "supported_io_types": { 00:17:09.621 "read": true, 00:17:09.621 "write": true, 00:17:09.621 "unmap": true, 00:17:09.621 "flush": false, 00:17:09.621 "reset": true, 00:17:09.621 "nvme_admin": false, 00:17:09.621 "nvme_io": false, 00:17:09.621 "nvme_io_md": false, 00:17:09.621 "write_zeroes": true, 00:17:09.621 "zcopy": false, 00:17:09.621 "get_zone_info": false, 00:17:09.621 "zone_management": false, 00:17:09.621 "zone_append": false, 00:17:09.621 "compare": false, 00:17:09.621 "compare_and_write": false, 00:17:09.621 "abort": false, 00:17:09.621 "seek_hole": true, 00:17:09.621 "seek_data": true, 00:17:09.621 "copy": false, 00:17:09.621 "nvme_iov_md": false 00:17:09.621 }, 00:17:09.621 "driver_specific": { 00:17:09.621 "lvol": { 00:17:09.621 "lvol_store_uuid": "0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39", 00:17:09.621 "base_bdev": "nvme0n1", 00:17:09.621 "thin_provision": true, 00:17:09.621 "num_allocated_clusters": 0, 00:17:09.621 "snapshot": false, 00:17:09.621 "clone": false, 00:17:09.621 "esnap_clone": false 00:17:09.621 } 00:17:09.621 } 00:17:09.621 } 00:17:09.621 ]' 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:09.621 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:09.883 09:10:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:10.145 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:10.145 { 00:17:10.145 "name": "77e24ea8-a34c-4810-88b4-4c5ecadfaf16", 00:17:10.145 "aliases": [ 00:17:10.145 "lvs/nvme0n1p0" 00:17:10.145 ], 00:17:10.145 "product_name": "Logical Volume", 00:17:10.145 "block_size": 4096, 00:17:10.145 "num_blocks": 26476544, 00:17:10.145 "uuid": "77e24ea8-a34c-4810-88b4-4c5ecadfaf16", 00:17:10.145 "assigned_rate_limits": { 00:17:10.145 "rw_ios_per_sec": 0, 00:17:10.145 "rw_mbytes_per_sec": 0, 00:17:10.145 "r_mbytes_per_sec": 0, 00:17:10.145 "w_mbytes_per_sec": 0 00:17:10.145 }, 00:17:10.145 "claimed": false, 00:17:10.145 "zoned": false, 00:17:10.145 "supported_io_types": { 00:17:10.145 "read": true, 00:17:10.145 "write": true, 00:17:10.145 "unmap": true, 00:17:10.145 "flush": false, 00:17:10.145 "reset": true, 00:17:10.145 "nvme_admin": false, 00:17:10.145 "nvme_io": false, 00:17:10.145 "nvme_io_md": false, 00:17:10.145 "write_zeroes": true, 00:17:10.145 "zcopy": false, 00:17:10.145 "get_zone_info": false, 00:17:10.145 "zone_management": false, 00:17:10.145 "zone_append": false, 00:17:10.145 "compare": false, 00:17:10.145 "compare_and_write": false, 00:17:10.145 "abort": false, 00:17:10.145 "seek_hole": true, 00:17:10.145 "seek_data": true, 00:17:10.145 "copy": false, 00:17:10.145 "nvme_iov_md": false 00:17:10.145 }, 00:17:10.145 "driver_specific": { 00:17:10.145 "lvol": { 00:17:10.145 "lvol_store_uuid": "0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39", 00:17:10.145 "base_bdev": "nvme0n1", 00:17:10.145 "thin_provision": true, 00:17:10.145 "num_allocated_clusters": 0, 00:17:10.145 "snapshot": false, 00:17:10.145 "clone": false, 00:17:10.145 "esnap_clone": false 00:17:10.145 } 00:17:10.145 } 00:17:10.145 } 00:17:10.145 ]' 00:17:10.145 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:10.145 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:10.145 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:10.404 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:10.404 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:10.404 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:10.404 09:10:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:10.404 09:10:49 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:17:10.405 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 00:17:10.664 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:10.665 { 00:17:10.665 "name": "77e24ea8-a34c-4810-88b4-4c5ecadfaf16", 00:17:10.665 "aliases": [ 00:17:10.665 "lvs/nvme0n1p0" 00:17:10.665 ], 00:17:10.665 "product_name": "Logical Volume", 00:17:10.665 "block_size": 4096, 00:17:10.665 "num_blocks": 26476544, 00:17:10.665 "uuid": "77e24ea8-a34c-4810-88b4-4c5ecadfaf16", 00:17:10.665 "assigned_rate_limits": { 00:17:10.665 "rw_ios_per_sec": 0, 00:17:10.665 "rw_mbytes_per_sec": 0, 00:17:10.665 "r_mbytes_per_sec": 0, 00:17:10.665 "w_mbytes_per_sec": 0 00:17:10.665 }, 00:17:10.665 "claimed": false, 00:17:10.665 "zoned": false, 00:17:10.665 "supported_io_types": { 00:17:10.665 "read": true, 00:17:10.665 "write": true, 00:17:10.665 "unmap": true, 00:17:10.665 "flush": false, 00:17:10.665 "reset": true, 00:17:10.665 "nvme_admin": false, 00:17:10.665 "nvme_io": false, 00:17:10.665 "nvme_io_md": false, 00:17:10.665 "write_zeroes": true, 00:17:10.665 "zcopy": false, 00:17:10.665 "get_zone_info": false, 00:17:10.665 "zone_management": false, 00:17:10.665 "zone_append": false, 00:17:10.665 "compare": false, 00:17:10.665 "compare_and_write": false, 00:17:10.665 "abort": false, 00:17:10.665 "seek_hole": true, 00:17:10.665 "seek_data": true, 00:17:10.665 "copy": false, 00:17:10.665 "nvme_iov_md": false 00:17:10.665 }, 00:17:10.665 "driver_specific": { 00:17:10.665 "lvol": { 00:17:10.665 "lvol_store_uuid": "0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39", 00:17:10.665 "base_bdev": "nvme0n1", 00:17:10.665 "thin_provision": true, 00:17:10.665 "num_allocated_clusters": 0, 00:17:10.665 "snapshot": false, 00:17:10.665 "clone": false, 00:17:10.665 "esnap_clone": false 00:17:10.665 } 00:17:10.665 } 00:17:10.665 } 00:17:10.665 ]' 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:10.665 09:10:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 -c nvc0n1p0 --l2p_dram_limit 20 00:17:10.924 [2024-11-20 09:10:49.718452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.718513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:10.925 [2024-11-20 09:10:49.718529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:10.925 [2024-11-20 09:10:49.718540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.718606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.718622] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.925 [2024-11-20 09:10:49.718630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:10.925 [2024-11-20 09:10:49.718640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.718658] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:10.925 [2024-11-20 09:10:49.719689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:10.925 [2024-11-20 09:10:49.719731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.719745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.925 [2024-11-20 09:10:49.719757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.078 ms 00:17:10.925 [2024-11-20 09:10:49.719769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.719859] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b38b87da-463a-467a-8d7a-1ca3be6ee8d5 00:17:10.925 [2024-11-20 09:10:49.721397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.721435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:10.925 [2024-11-20 09:10:49.721448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:10.925 [2024-11-20 09:10:49.721459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.729718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.729909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.925 [2024-11-20 09:10:49.729931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.208 ms 00:17:10.925 [2024-11-20 09:10:49.729940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.730040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.730050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.925 [2024-11-20 09:10:49.730065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:10.925 [2024-11-20 09:10:49.730073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.730145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.730156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:10.925 [2024-11-20 09:10:49.730167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:10.925 [2024-11-20 09:10:49.730175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.730197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:10.925 [2024-11-20 09:10:49.734346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.734381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.925 [2024-11-20 09:10:49.734391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.156 ms 00:17:10.925 [2024-11-20 09:10:49.734402] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.734440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.734450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:10.925 [2024-11-20 09:10:49.734459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:10.925 [2024-11-20 09:10:49.734470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.734508] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:10.925 [2024-11-20 09:10:49.734657] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:10.925 [2024-11-20 09:10:49.734671] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:10.925 [2024-11-20 09:10:49.734684] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:10.925 [2024-11-20 09:10:49.734696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:10.925 [2024-11-20 09:10:49.734707] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:10.925 [2024-11-20 09:10:49.734715] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:10.925 [2024-11-20 09:10:49.734725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:10.925 [2024-11-20 09:10:49.734733] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:10.925 [2024-11-20 09:10:49.734742] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:10.925 [2024-11-20 09:10:49.734750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.734763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:10.925 [2024-11-20 09:10:49.734772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:17:10.925 [2024-11-20 09:10:49.734781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.734862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.925 [2024-11-20 09:10:49.734893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:10.925 [2024-11-20 09:10:49.734902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:10.925 [2024-11-20 09:10:49.734913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.925 [2024-11-20 09:10:49.735005] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:10.925 [2024-11-20 09:10:49.735019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:10.925 [2024-11-20 09:10:49.735031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.925 [2024-11-20 09:10:49.735042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:10.925 [2024-11-20 09:10:49.735059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:10.925 
[2024-11-20 09:10:49.735077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:10.925 [2024-11-20 09:10:49.735084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.925 [2024-11-20 09:10:49.735100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:10.925 [2024-11-20 09:10:49.735110] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:10.925 [2024-11-20 09:10:49.735117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:10.925 [2024-11-20 09:10:49.735134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:10.925 [2024-11-20 09:10:49.735141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:10.925 [2024-11-20 09:10:49.735153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:10.925 [2024-11-20 09:10:49.735169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:10.925 [2024-11-20 09:10:49.735175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:10.925 [2024-11-20 09:10:49.735203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.925 [2024-11-20 09:10:49.735220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:10.925 [2024-11-20 09:10:49.735230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.925 [2024-11-20 09:10:49.735246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:10.925 [2024-11-20 09:10:49.735254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:10.925 [2024-11-20 09:10:49.735266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.925 [2024-11-20 09:10:49.735273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:10.925 [2024-11-20 09:10:49.735283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:10.926 [2024-11-20 09:10:49.735291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:10.926 [2024-11-20 09:10:49.735302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:10.926 [2024-11-20 09:10:49.735309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:10.926 [2024-11-20 09:10:49.735318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.926 [2024-11-20 09:10:49.735325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:10.926 [2024-11-20 09:10:49.735334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:10.926 [2024-11-20 09:10:49.735341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:10.926 [2024-11-20 09:10:49.735351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:10.926 [2024-11-20 09:10:49.735358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:10.926 [2024-11-20 09:10:49.735366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.926 [2024-11-20 09:10:49.735373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:10.926 [2024-11-20 09:10:49.735382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:10.926 [2024-11-20 09:10:49.735389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.926 [2024-11-20 09:10:49.735397] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:10.926 [2024-11-20 09:10:49.735405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:10.926 [2024-11-20 09:10:49.735415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:10.926 [2024-11-20 09:10:49.735423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:10.926 [2024-11-20 09:10:49.735436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:10.926 [2024-11-20 09:10:49.735443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:10.926 [2024-11-20 09:10:49.735452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:10.926 [2024-11-20 09:10:49.735459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:10.926 [2024-11-20 09:10:49.735467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:10.926 [2024-11-20 09:10:49.735474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:10.926 [2024-11-20 09:10:49.735486] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:10.926 [2024-11-20 09:10:49.735496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:10.926 [2024-11-20 09:10:49.735513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:10.926 [2024-11-20 09:10:49.735524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:10.926 [2024-11-20 09:10:49.735532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:10.926 [2024-11-20 09:10:49.735542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:10.926 [2024-11-20 09:10:49.735549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:10.926 [2024-11-20 09:10:49.735560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:10.926 [2024-11-20 09:10:49.735568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:10.926 [2024-11-20 09:10:49.735579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:10.926 [2024-11-20 09:10:49.735587] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:10.926 [2024-11-20 09:10:49.735632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:10.926 [2024-11-20 09:10:49.735641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:10.926 [2024-11-20 09:10:49.735660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:10.926 [2024-11-20 09:10:49.735669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:10.926 [2024-11-20 09:10:49.735677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:10.926 [2024-11-20 09:10:49.735686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.926 [2024-11-20 09:10:49.735696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:10.926 [2024-11-20 09:10:49.735706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:17:10.926 [2024-11-20 09:10:49.735714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.926 [2024-11-20 09:10:49.735749] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
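[Editor's note] The FTL startup dump above runs against a stack of bdevs assembled step by step earlier in this section. For readability, the RPC sequence is collected here as one plain script; every command, size, and UUID appears verbatim in the trace lines above (sizes are in MiB, and the lvol is thin-provisioned):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0      # base NVMe -> nvme0n1
    $rpc bdev_lvol_delete_lvstore -u 5fb315ab-8cd0-4a4d-b424-be502517d903  # clear stale lvstore found on it
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39  # 101 GiB thin lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0       # cache NVMe -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                                # 5171 MiB NV-cache split nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 77e24ea8-a34c-4810-88b4-4c5ecadfaf16 -c nvc0n1p0 --l2p_dram_limit 20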
00:17:10.926 [2024-11-20 09:10:49.735775] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:15.131 [2024-11-20 09:10:53.594914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.595202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:15.131 [2024-11-20 09:10:53.595245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3859.139 ms 00:17:15.131 [2024-11-20 09:10:53.595257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.632998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.633055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.131 [2024-11-20 09:10:53.633075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.478 ms 00:17:15.131 [2024-11-20 09:10:53.633085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.633249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.633263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.131 [2024-11-20 09:10:53.633279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:15.131 [2024-11-20 09:10:53.633288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.683598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.683656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.131 [2024-11-20 09:10:53.683677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.238 ms 00:17:15.131 [2024-11-20 09:10:53.683687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.683736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.683751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.131 [2024-11-20 09:10:53.683763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:15.131 [2024-11-20 09:10:53.683771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.684533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.684584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.131 [2024-11-20 09:10:53.684598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:17:15.131 [2024-11-20 09:10:53.684607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.684743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.684753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.131 [2024-11-20 09:10:53.684769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:17:15.131 [2024-11-20 09:10:53.684779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.703303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.703349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.131 [2024-11-20 
09:10:53.703365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.502 ms 00:17:15.131 [2024-11-20 09:10:53.703374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.718313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:15.131 [2024-11-20 09:10:53.727606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.727659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.131 [2024-11-20 09:10:53.727672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.139 ms 00:17:15.131 [2024-11-20 09:10:53.727684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.826686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.826754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:15.131 [2024-11-20 09:10:53.826772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.970 ms 00:17:15.131 [2024-11-20 09:10:53.826784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.827045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.827068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.131 [2024-11-20 09:10:53.827079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:17:15.131 [2024-11-20 09:10:53.827091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.853913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.854154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:15.131 [2024-11-20 09:10:53.854179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.765 ms 00:17:15.131 [2024-11-20 09:10:53.854191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.879636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.879690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:15.131 [2024-11-20 09:10:53.879704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.399 ms 00:17:15.131 [2024-11-20 09:10:53.879715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.880370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.880396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.131 [2024-11-20 09:10:53.880407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:17:15.131 [2024-11-20 09:10:53.880418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:53.965868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.965935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:15.131 [2024-11-20 09:10:53.965949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.408 ms 00:17:15.131 [2024-11-20 09:10:53.965961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 
09:10:53.994795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:53.994853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:15.131 [2024-11-20 09:10:53.994867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.743 ms 00:17:15.131 [2024-11-20 09:10:53.994897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.131 [2024-11-20 09:10:54.021079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.131 [2024-11-20 09:10:54.021298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:15.131 [2024-11-20 09:10:54.021333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.134 ms 00:17:15.131 [2024-11-20 09:10:54.021345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.392 [2024-11-20 09:10:54.047432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.392 [2024-11-20 09:10:54.047489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:15.392 [2024-11-20 09:10:54.047503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.041 ms 00:17:15.392 [2024-11-20 09:10:54.047514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.392 [2024-11-20 09:10:54.047568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.392 [2024-11-20 09:10:54.047586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:15.392 [2024-11-20 09:10:54.047596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:15.392 [2024-11-20 09:10:54.047607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.392 [2024-11-20 09:10:54.047721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.392 [2024-11-20 09:10:54.047737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:15.392 [2024-11-20 09:10:54.047748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:15.392 [2024-11-20 09:10:54.047759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.393 [2024-11-20 09:10:54.049252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4330.177 ms, result 0 00:17:15.393 { 00:17:15.393 "name": "ftl0", 00:17:15.393 "uuid": "b38b87da-463a-467a-8d7a-1ca3be6ee8d5" 00:17:15.393 } 00:17:15.393 09:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:15.393 09:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:15.393 09:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:15.393 09:10:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:15.655 [2024-11-20 09:10:54.393162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:15.655 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:15.655 Zero copy mechanism will not be used. 00:17:15.655 Running I/O for 4 seconds... 
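This pass is driven entirely over RPC: as the bdevperf.sh@28 trace lines show, the script first confirms the FTL bdev actually registered (bdev_ftl_get_stats piped through jq and grep), then hands the workload to the already-running bdevperf process via its perform_tests helper. A minimal sketch of that sequence, using the same paths this job uses:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    # confirm the FTL bdev came up before driving I/O at it
    $rpc_py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # 4-second random-write pass: queue depth 1, 69632-byte (68 KiB) IOs
    $bdevperf_py perform_tests -q 1 -w randwrite -t 4 -o 69632

Since 69632 bytes exceeds the 65536-byte zero-copy threshold, bdevperf disables zero copy for this pass, exactly as the notice above states.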
00:17:17.544 966.00 IOPS, 64.15 MiB/s [2024-11-20T09:10:57.406Z] 1111.00 IOPS, 73.78 MiB/s [2024-11-20T09:10:58.798Z] 1143.33 IOPS, 75.92 MiB/s [2024-11-20T09:10:58.798Z] 1051.00 IOPS, 69.79 MiB/s 00:17:19.879 Latency(us) 00:17:19.879 [2024-11-20T09:10:58.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.879 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:19.879 ftl0 : 4.00 1050.68 69.77 0.00 0.00 1005.81 220.55 2495.41 00:17:19.879 [2024-11-20T09:10:58.798Z] =================================================================================================================== 00:17:19.879 [2024-11-20T09:10:58.798Z] Total : 1050.68 69.77 0.00 0.00 1005.81 220.55 2495.41 00:17:19.879 [2024-11-20 09:10:58.403621] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:19.879 { 00:17:19.879 "results": [ 00:17:19.879 { 00:17:19.879 "job": "ftl0", 00:17:19.879 "core_mask": "0x1", 00:17:19.879 "workload": "randwrite", 00:17:19.879 "status": "finished", 00:17:19.879 "queue_depth": 1, 00:17:19.879 "io_size": 69632, 00:17:19.879 "runtime": 4.002159, 00:17:19.879 "iops": 1050.682893908013, 00:17:19.879 "mibps": 69.77191092357899, 00:17:19.879 "io_failed": 0, 00:17:19.879 "io_timeout": 0, 00:17:19.879 "avg_latency_us": 1005.8079604866002, 00:17:19.879 "min_latency_us": 220.55384615384617, 00:17:19.879 "max_latency_us": 2495.409230769231 00:17:19.879 } 00:17:19.879 ], 00:17:19.879 "core_count": 1 00:17:19.879 } 00:17:19.879 09:10:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:19.879 [2024-11-20 09:10:58.519895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:19.879 Running I/O for 4 seconds... 
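Each pass ends with a JSON summary like the block above. The headline numbers can be pulled out with jq; a sketch, assuming the block was captured to a hypothetical results.json:

    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json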
00:17:21.765 6101.00 IOPS, 23.83 MiB/s [2024-11-20T09:11:01.627Z] 5332.00 IOPS, 20.83 MiB/s [2024-11-20T09:11:02.571Z] 5160.67 IOPS, 20.16 MiB/s [2024-11-20T09:11:02.571Z] 5075.00 IOPS, 19.82 MiB/s 00:17:23.652 Latency(us) 00:17:23.652 [2024-11-20T09:11:02.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.652 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.652 ftl0 : 4.04 5059.76 19.76 0.00 0.00 25193.01 475.77 47387.57 00:17:23.652 [2024-11-20T09:11:02.571Z] =================================================================================================================== 00:17:23.652 [2024-11-20T09:11:02.571Z] Total : 5059.76 19.76 0.00 0.00 25193.01 0.00 47387.57 00:17:23.652 [2024-11-20 09:11:02.565418] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:23.652 { 00:17:23.652 "results": [ 00:17:23.652 { 00:17:23.652 "job": "ftl0", 00:17:23.652 "core_mask": "0x1", 00:17:23.652 "workload": "randwrite", 00:17:23.652 "status": "finished", 00:17:23.652 "queue_depth": 128, 00:17:23.652 "io_size": 4096, 00:17:23.652 "runtime": 4.037344, 00:17:23.652 "iops": 5059.762061394818, 00:17:23.652 "mibps": 19.76469555232351, 00:17:23.652 "io_failed": 0, 00:17:23.652 "io_timeout": 0, 00:17:23.652 "avg_latency_us": 25193.008777695773, 00:17:23.652 "min_latency_us": 475.7661538461538, 00:17:23.652 "max_latency_us": 47387.56923076923 00:17:23.652 } 00:17:23.652 ], 00:17:23.652 "core_count": 1 00:17:23.652 } 00:17:23.914 09:11:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:23.914 [2024-11-20 09:11:02.684183] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:23.914 Running I/O for 4 seconds... 
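The mibps figure in these summaries follows directly from IOPS times IO size: for the 4 KiB, queue-depth-128 pass above, 5059.76 IOPS x 4096 bytes / 1048576 bytes-per-MiB gives about 19.76 MiB/s, matching the reported value. A one-line check with bc:

    echo '5059.762061394818 * 4096 / 1048576' | bc -l    # ~19.7647, i.e. the reported mibps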
00:17:25.803 4600.00 IOPS, 17.97 MiB/s [2024-11-20T09:11:06.110Z] 4517.50 IOPS, 17.65 MiB/s [2024-11-20T09:11:07.053Z] 4436.33 IOPS, 17.33 MiB/s [2024-11-20T09:11:07.053Z] 4454.00 IOPS, 17.40 MiB/s 00:17:28.134 Latency(us) 00:17:28.134 [2024-11-20T09:11:07.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.134 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:28.134 Verification LBA range: start 0x0 length 0x1400000 00:17:28.134 ftl0 : 4.01 4469.76 17.46 0.00 0.00 28557.42 419.05 41741.39 00:17:28.134 [2024-11-20T09:11:07.053Z] =================================================================================================================== 00:17:28.134 [2024-11-20T09:11:07.053Z] Total : 4469.76 17.46 0.00 0.00 28557.42 0.00 41741.39 00:17:28.134 [2024-11-20 09:11:06.715390] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:28.134 { 00:17:28.134 "results": [ 00:17:28.134 { 00:17:28.134 "job": "ftl0", 00:17:28.134 "core_mask": "0x1", 00:17:28.134 "workload": "verify", 00:17:28.134 "status": "finished", 00:17:28.134 "verify_range": { 00:17:28.134 "start": 0, 00:17:28.134 "length": 20971520 00:17:28.134 }, 00:17:28.134 "queue_depth": 128, 00:17:28.134 "io_size": 4096, 00:17:28.134 "runtime": 4.01453, 00:17:28.134 "iops": 4469.7635837819125, 00:17:28.134 "mibps": 17.460013999148096, 00:17:28.134 "io_failed": 0, 00:17:28.134 "io_timeout": 0, 00:17:28.134 "avg_latency_us": 28557.423692170512, 00:17:28.134 "min_latency_us": 419.0523076923077, 00:17:28.134 "max_latency_us": 41741.39076923077 00:17:28.134 } 00:17:28.134 ], 00:17:28.134 "core_count": 1 00:17:28.134 } 00:17:28.134 09:11:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:28.134 [2024-11-20 09:11:06.938438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.134 [2024-11-20 09:11:06.938640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:28.134 [2024-11-20 09:11:06.938666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.134 [2024-11-20 09:11:06.938677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.134 [2024-11-20 09:11:06.938706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:28.134 [2024-11-20 09:11:06.941716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.134 [2024-11-20 09:11:06.941890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:28.134 [2024-11-20 09:11:06.941917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.988 ms 00:17:28.134 [2024-11-20 09:11:06.941925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.134 [2024-11-20 09:11:06.945094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.134 [2024-11-20 09:11:06.945309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:28.134 [2024-11-20 09:11:06.945339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:17:28.134 [2024-11-20 09:11:06.945349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.160121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.160334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:17:28.396 [2024-11-20 09:11:07.160426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.731 ms 00:17:28.396 [2024-11-20 09:11:07.160456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.166734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.166931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:28.396 [2024-11-20 09:11:07.167013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.218 ms 00:17:28.396 [2024-11-20 09:11:07.167037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.193779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.193979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:28.396 [2024-11-20 09:11:07.194469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.641 ms 00:17:28.396 [2024-11-20 09:11:07.194523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.212927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.213106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:28.396 [2024-11-20 09:11:07.213241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.292 ms 00:17:28.396 [2024-11-20 09:11:07.213267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.213435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.213467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:28.396 [2024-11-20 09:11:07.213495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:28.396 [2024-11-20 09:11:07.213515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.239468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.239634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:28.396 [2024-11-20 09:11:07.239701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.920 ms 00:17:28.396 [2024-11-20 09:11:07.239723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.265801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.266009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:28.396 [2024-11-20 09:11:07.266083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.766 ms 00:17:28.396 [2024-11-20 09:11:07.266108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.396 [2024-11-20 09:11:07.290913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.396 [2024-11-20 09:11:07.291082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:28.396 [2024-11-20 09:11:07.291149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.748 ms 00:17:28.396 [2024-11-20 09:11:07.291172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-20 09:11:07.316305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-20 09:11:07.316471] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:28.659 [2024-11-20 09:11:07.316540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:17:28.659 [2024-11-20 09:11:07.316562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-20 09:11:07.316612] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:28.659 [2024-11-20 09:11:07.316642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.316677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.316706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.316737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.316818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.316851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.316907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:28.659 [2024-11-20 09:11:07.317512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:28.659 [2024-11-20 09:11:07.317979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.317990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.317998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318259] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:28.660 [2024-11-20 09:11:07.318303] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:28.660 [2024-11-20 09:11:07.318313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b38b87da-463a-467a-8d7a-1ca3be6ee8d5 00:17:28.660 [2024-11-20 09:11:07.318321] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:28.660 [2024-11-20 09:11:07.318331] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:28.660 [2024-11-20 09:11:07.318341] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:28.660 [2024-11-20 09:11:07.318351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:28.660 [2024-11-20 09:11:07.318358] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:28.660 [2024-11-20 09:11:07.318368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:28.660 [2024-11-20 09:11:07.318375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:28.660 [2024-11-20 09:11:07.318386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:28.660 [2024-11-20 09:11:07.318392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:28.660 [2024-11-20 09:11:07.318405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.660 [2024-11-20 09:11:07.318414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:28.660 [2024-11-20 09:11:07.318426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.795 ms 00:17:28.660 [2024-11-20 09:11:07.318434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.332374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.660 [2024-11-20 09:11:07.332418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:28.660 [2024-11-20 09:11:07.332432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.862 ms 00:17:28.660 [2024-11-20 09:11:07.332441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.332825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.660 [2024-11-20 09:11:07.332836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:28.660 [2024-11-20 09:11:07.332847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.358 ms 00:17:28.660 [2024-11-20 09:11:07.332855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.372130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.660 [2024-11-20 09:11:07.372176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.660 [2024-11-20 09:11:07.372193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.660 [2024-11-20 09:11:07.372201] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.372269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.660 [2024-11-20 09:11:07.372278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.660 [2024-11-20 09:11:07.372289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.660 [2024-11-20 09:11:07.372297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.372377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.660 [2024-11-20 09:11:07.372391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.660 [2024-11-20 09:11:07.372402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.660 [2024-11-20 09:11:07.372410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.372427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.660 [2024-11-20 09:11:07.372436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.660 [2024-11-20 09:11:07.372446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.660 [2024-11-20 09:11:07.372454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.457691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.660 [2024-11-20 09:11:07.457762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.660 [2024-11-20 09:11:07.457780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.660 [2024-11-20 09:11:07.457789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-20 09:11:07.527970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.660 [2024-11-20 09:11:07.528028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.660 [2024-11-20 09:11:07.528044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.661 [2024-11-20 09:11:07.528052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.661 [2024-11-20 09:11:07.528176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.661 [2024-11-20 09:11:07.528192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.661 [2024-11-20 09:11:07.528201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.661 [2024-11-20 09:11:07.528255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.661 [2024-11-20 09:11:07.528266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.661 [2024-11-20 09:11:07.528274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.661 [2024-11-20 09:11:07.528384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.661 [2024-11-20 09:11:07.528401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:28.661 [2024-11-20 09:11:07.528409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.661 [2024-11-20 09:11:07.528453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.661 [2024-11-20 09:11:07.528463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.661 [2024-11-20 09:11:07.528471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.661 [2024-11-20 09:11:07.528526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.661 [2024-11-20 09:11:07.528536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.661 [2024-11-20 09:11:07.528547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.661 [2024-11-20 09:11:07.528617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.661 [2024-11-20 09:11:07.528628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.661 [2024-11-20 09:11:07.528637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.661 [2024-11-20 09:11:07.528781] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 590.291 ms, result 0 00:17:28.661 true 00:17:28.661 09:11:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73424 00:17:28.661 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 73424 ']' 00:17:28.661 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 73424 00:17:28.661 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:17:28.661 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.661 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73424 00:17:28.922 killing process with pid 73424 00:17:28.922 Received shutdown signal, test time was about 4.000000 seconds 00:17:28.922 00:17:28.922 Latency(us) 00:17:28.922 [2024-11-20T09:11:07.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.922 [2024-11-20T09:11:07.841Z] =================================================================================================================== 00:17:28.922 [2024-11-20T09:11:07.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.922 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.922 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.922 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73424' 00:17:28.922 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 73424 00:17:28.922 09:11:07 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 73424 00:17:34.216 Remove shared memory files 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:34.216 09:11:12 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:34.216 ************************************ 00:17:34.216 END TEST ftl_bdevperf 00:17:34.216 ************************************ 00:17:34.216 00:17:34.216 real 0m26.651s 00:17:34.216 user 0m29.128s 00:17:34.216 sys 0m1.113s 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.216 09:11:12 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:34.216 09:11:12 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:34.216 09:11:12 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:34.216 09:11:12 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.216 09:11:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:34.216 ************************************ 00:17:34.216 START TEST ftl_trim 00:17:34.216 ************************************ 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:34.216 * Looking for test storage... 00:17:34.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.216 09:11:12 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.216 --rc genhtml_branch_coverage=1 00:17:34.216 --rc genhtml_function_coverage=1 00:17:34.216 --rc genhtml_legend=1 00:17:34.216 --rc geninfo_all_blocks=1 00:17:34.216 --rc geninfo_unexecuted_blocks=1 00:17:34.216 00:17:34.216 ' 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.216 --rc genhtml_branch_coverage=1 00:17:34.216 --rc genhtml_function_coverage=1 00:17:34.216 --rc genhtml_legend=1 00:17:34.216 --rc geninfo_all_blocks=1 00:17:34.216 --rc geninfo_unexecuted_blocks=1 00:17:34.216 00:17:34.216 ' 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.216 --rc genhtml_branch_coverage=1 00:17:34.216 --rc genhtml_function_coverage=1 00:17:34.216 --rc genhtml_legend=1 00:17:34.216 --rc geninfo_all_blocks=1 00:17:34.216 --rc geninfo_unexecuted_blocks=1 00:17:34.216 00:17:34.216 ' 00:17:34.216 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:34.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.216 --rc genhtml_branch_coverage=1 00:17:34.216 --rc genhtml_function_coverage=1 00:17:34.216 --rc genhtml_legend=1 00:17:34.216 --rc geninfo_all_blocks=1 00:17:34.216 --rc geninfo_unexecuted_blocks=1 00:17:34.216 00:17:34.216 ' 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
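The common.sh prologue traced here resolves every path relative to the script itself: dirname plus readlink -f yields the test directory, and the repo root sits two levels up. A standalone sketch of that idiom, with the values this job resolves shown in comments:

    # resolve the test dir and repo root from inside a test script
    testdir=$(readlink -f "$(dirname "$0")")    # -> /home/vagrant/spdk_repo/spdk/test/ftl
    rootdir=$(readlink -f "$testdir/../..")     # -> /home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py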
00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.216 09:11:12 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:34.217 09:11:12 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73781 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73781 00:17:34.217 09:11:12 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:34.217 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 73781 ']' 00:17:34.217 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.217 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.217 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.217 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.217 09:11:12 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:34.217 [2024-11-20 09:11:12.667203] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:17:34.217 [2024-11-20 09:11:12.667521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73781 ] 00:17:34.217 [2024-11-20 09:11:12.831968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:34.217 [2024-11-20 09:11:12.954457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.217 [2024-11-20 09:11:12.954793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:34.217 [2024-11-20 09:11:12.954905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.790 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:34.790 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:17:34.790 09:11:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:34.790 09:11:13 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:34.790 09:11:13 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:34.790 09:11:13 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:34.790 09:11:13 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:34.790 09:11:13 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:35.052 09:11:13 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:35.052 09:11:13 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:35.052 09:11:13 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:35.052 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:35.052 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:35.052 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:35.052 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:35.052 09:11:13 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:35.314 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:35.314 { 00:17:35.314 "name": "nvme0n1", 00:17:35.314 "aliases": [ 
00:17:35.314 "cb1b6749-f502-4b15-bdfb-f2be9385f026" 00:17:35.314 ], 00:17:35.314 "product_name": "NVMe disk", 00:17:35.314 "block_size": 4096, 00:17:35.314 "num_blocks": 1310720, 00:17:35.314 "uuid": "cb1b6749-f502-4b15-bdfb-f2be9385f026", 00:17:35.314 "numa_id": -1, 00:17:35.314 "assigned_rate_limits": { 00:17:35.314 "rw_ios_per_sec": 0, 00:17:35.314 "rw_mbytes_per_sec": 0, 00:17:35.314 "r_mbytes_per_sec": 0, 00:17:35.314 "w_mbytes_per_sec": 0 00:17:35.314 }, 00:17:35.314 "claimed": true, 00:17:35.314 "claim_type": "read_many_write_one", 00:17:35.314 "zoned": false, 00:17:35.314 "supported_io_types": { 00:17:35.314 "read": true, 00:17:35.314 "write": true, 00:17:35.314 "unmap": true, 00:17:35.314 "flush": true, 00:17:35.314 "reset": true, 00:17:35.314 "nvme_admin": true, 00:17:35.314 "nvme_io": true, 00:17:35.314 "nvme_io_md": false, 00:17:35.314 "write_zeroes": true, 00:17:35.314 "zcopy": false, 00:17:35.314 "get_zone_info": false, 00:17:35.314 "zone_management": false, 00:17:35.314 "zone_append": false, 00:17:35.314 "compare": true, 00:17:35.314 "compare_and_write": false, 00:17:35.314 "abort": true, 00:17:35.314 "seek_hole": false, 00:17:35.314 "seek_data": false, 00:17:35.314 "copy": true, 00:17:35.314 "nvme_iov_md": false 00:17:35.314 }, 00:17:35.314 "driver_specific": { 00:17:35.314 "nvme": [ 00:17:35.314 { 00:17:35.314 "pci_address": "0000:00:11.0", 00:17:35.314 "trid": { 00:17:35.314 "trtype": "PCIe", 00:17:35.314 "traddr": "0000:00:11.0" 00:17:35.314 }, 00:17:35.314 "ctrlr_data": { 00:17:35.314 "cntlid": 0, 00:17:35.314 "vendor_id": "0x1b36", 00:17:35.314 "model_number": "QEMU NVMe Ctrl", 00:17:35.314 "serial_number": "12341", 00:17:35.314 "firmware_revision": "8.0.0", 00:17:35.314 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:35.314 "oacs": { 00:17:35.314 "security": 0, 00:17:35.314 "format": 1, 00:17:35.314 "firmware": 0, 00:17:35.314 "ns_manage": 1 00:17:35.314 }, 00:17:35.314 "multi_ctrlr": false, 00:17:35.314 "ana_reporting": false 00:17:35.314 }, 00:17:35.314 "vs": { 00:17:35.314 "nvme_version": "1.4" 00:17:35.314 }, 00:17:35.314 "ns_data": { 00:17:35.314 "id": 1, 00:17:35.314 "can_share": false 00:17:35.314 } 00:17:35.314 } 00:17:35.314 ], 00:17:35.314 "mp_policy": "active_passive" 00:17:35.314 } 00:17:35.314 } 00:17:35.314 ]' 00:17:35.314 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:35.314 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:35.314 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:35.579 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:35.579 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:35.579 09:11:14 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:35.579 09:11:14 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 0dd1d7fd-6ad4-4acf-9ed7-4f2c72d35b39 00:17:35.980 09:11:14 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:36.252 09:11:14 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d10c1c75-f9bd-4ca1-8f79-5b130fc8e131 00:17:36.252 09:11:14 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d10c1c75-f9bd-4ca1-8f79-5b130fc8e131 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:36.252 09:11:15 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:36.252 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:36.252 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:36.252 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:36.252 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:36.252 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:36.514 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:36.514 { 00:17:36.514 "name": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:36.514 "aliases": [ 00:17:36.514 "lvs/nvme0n1p0" 00:17:36.514 ], 00:17:36.514 "product_name": "Logical Volume", 00:17:36.514 "block_size": 4096, 00:17:36.514 "num_blocks": 26476544, 00:17:36.514 "uuid": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:36.514 "assigned_rate_limits": { 00:17:36.514 "rw_ios_per_sec": 0, 00:17:36.514 "rw_mbytes_per_sec": 0, 00:17:36.514 "r_mbytes_per_sec": 0, 00:17:36.514 "w_mbytes_per_sec": 0 00:17:36.514 }, 00:17:36.514 "claimed": false, 00:17:36.514 "zoned": false, 00:17:36.514 "supported_io_types": { 00:17:36.514 "read": true, 00:17:36.514 "write": true, 00:17:36.514 "unmap": true, 00:17:36.514 "flush": false, 00:17:36.514 "reset": true, 00:17:36.514 "nvme_admin": false, 00:17:36.514 "nvme_io": false, 00:17:36.514 "nvme_io_md": false, 00:17:36.514 "write_zeroes": true, 00:17:36.514 "zcopy": false, 00:17:36.514 "get_zone_info": false, 00:17:36.514 "zone_management": false, 00:17:36.514 "zone_append": false, 00:17:36.514 "compare": false, 00:17:36.514 "compare_and_write": false, 00:17:36.514 "abort": false, 00:17:36.514 "seek_hole": true, 00:17:36.514 "seek_data": true, 00:17:36.514 "copy": false, 00:17:36.514 "nvme_iov_md": false 00:17:36.514 }, 00:17:36.514 "driver_specific": { 00:17:36.514 "lvol": { 00:17:36.514 "lvol_store_uuid": "d10c1c75-f9bd-4ca1-8f79-5b130fc8e131", 00:17:36.514 "base_bdev": "nvme0n1", 00:17:36.514 "thin_provision": true, 00:17:36.514 "num_allocated_clusters": 0, 00:17:36.514 "snapshot": false, 00:17:36.514 "clone": false, 00:17:36.514 "esnap_clone": false 00:17:36.514 } 00:17:36.514 } 00:17:36.514 } 00:17:36.514 ]' 00:17:36.514 09:11:15 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:36.514 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:36.514 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:36.775 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:36.775 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:36.775 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:36.775 09:11:15 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:36.775 09:11:15 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:36.775 09:11:15 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:37.034 09:11:15 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:37.034 09:11:15 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:37.034 09:11:15 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:37.034 { 00:17:37.034 "name": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:37.034 "aliases": [ 00:17:37.034 "lvs/nvme0n1p0" 00:17:37.034 ], 00:17:37.034 "product_name": "Logical Volume", 00:17:37.034 "block_size": 4096, 00:17:37.034 "num_blocks": 26476544, 00:17:37.034 "uuid": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:37.034 "assigned_rate_limits": { 00:17:37.034 "rw_ios_per_sec": 0, 00:17:37.034 "rw_mbytes_per_sec": 0, 00:17:37.034 "r_mbytes_per_sec": 0, 00:17:37.034 "w_mbytes_per_sec": 0 00:17:37.034 }, 00:17:37.034 "claimed": false, 00:17:37.034 "zoned": false, 00:17:37.034 "supported_io_types": { 00:17:37.034 "read": true, 00:17:37.034 "write": true, 00:17:37.034 "unmap": true, 00:17:37.034 "flush": false, 00:17:37.034 "reset": true, 00:17:37.034 "nvme_admin": false, 00:17:37.034 "nvme_io": false, 00:17:37.034 "nvme_io_md": false, 00:17:37.034 "write_zeroes": true, 00:17:37.034 "zcopy": false, 00:17:37.034 "get_zone_info": false, 00:17:37.034 "zone_management": false, 00:17:37.034 "zone_append": false, 00:17:37.034 "compare": false, 00:17:37.034 "compare_and_write": false, 00:17:37.034 "abort": false, 00:17:37.034 "seek_hole": true, 00:17:37.034 "seek_data": true, 00:17:37.034 "copy": false, 00:17:37.034 "nvme_iov_md": false 00:17:37.034 }, 00:17:37.034 "driver_specific": { 00:17:37.034 "lvol": { 00:17:37.034 "lvol_store_uuid": "d10c1c75-f9bd-4ca1-8f79-5b130fc8e131", 00:17:37.034 "base_bdev": "nvme0n1", 00:17:37.034 "thin_provision": true, 00:17:37.034 "num_allocated_clusters": 0, 00:17:37.034 "snapshot": false, 00:17:37.034 "clone": false, 00:17:37.034 "esnap_clone": false 00:17:37.034 } 00:17:37.034 } 00:17:37.034 } 00:17:37.034 ]' 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:37.034 09:11:15 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:17:37.034 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:37.293 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:37.293 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:37.293 09:11:15 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:37.293 09:11:15 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:37.293 09:11:15 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:37.293 09:11:16 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:37.293 09:11:16 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:37.293 09:11:16 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:37.293 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:37.293 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:37.293 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:17:37.293 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:17:37.293 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3ac869a-7dde-4de4-aded-210e436a51e9 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:37.552 { 00:17:37.552 "name": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:37.552 "aliases": [ 00:17:37.552 "lvs/nvme0n1p0" 00:17:37.552 ], 00:17:37.552 "product_name": "Logical Volume", 00:17:37.552 "block_size": 4096, 00:17:37.552 "num_blocks": 26476544, 00:17:37.552 "uuid": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:37.552 "assigned_rate_limits": { 00:17:37.552 "rw_ios_per_sec": 0, 00:17:37.552 "rw_mbytes_per_sec": 0, 00:17:37.552 "r_mbytes_per_sec": 0, 00:17:37.552 "w_mbytes_per_sec": 0 00:17:37.552 }, 00:17:37.552 "claimed": false, 00:17:37.552 "zoned": false, 00:17:37.552 "supported_io_types": { 00:17:37.552 "read": true, 00:17:37.552 "write": true, 00:17:37.552 "unmap": true, 00:17:37.552 "flush": false, 00:17:37.552 "reset": true, 00:17:37.552 "nvme_admin": false, 00:17:37.552 "nvme_io": false, 00:17:37.552 "nvme_io_md": false, 00:17:37.552 "write_zeroes": true, 00:17:37.552 "zcopy": false, 00:17:37.552 "get_zone_info": false, 00:17:37.552 "zone_management": false, 00:17:37.552 "zone_append": false, 00:17:37.552 "compare": false, 00:17:37.552 "compare_and_write": false, 00:17:37.552 "abort": false, 00:17:37.552 "seek_hole": true, 00:17:37.552 "seek_data": true, 00:17:37.552 "copy": false, 00:17:37.552 "nvme_iov_md": false 00:17:37.552 }, 00:17:37.552 "driver_specific": { 00:17:37.552 "lvol": { 00:17:37.552 "lvol_store_uuid": "d10c1c75-f9bd-4ca1-8f79-5b130fc8e131", 00:17:37.552 "base_bdev": "nvme0n1", 00:17:37.552 "thin_provision": true, 00:17:37.552 "num_allocated_clusters": 0, 00:17:37.552 "snapshot": false, 00:17:37.552 "clone": false, 00:17:37.552 "esnap_clone": false 00:17:37.552 } 00:17:37.552 } 00:17:37.552 } 00:17:37.552 ]' 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:37.552 09:11:16 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:17:37.552 09:11:16 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:37.552 09:11:16 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c3ac869a-7dde-4de4-aded-210e436a51e9 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:37.811 [2024-11-20 09:11:16.618054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.618092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:37.812 [2024-11-20 09:11:16.618106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.812 [2024-11-20 09:11:16.618112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.620387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.620419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.812 [2024-11-20 09:11:16.620428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.245 ms 00:17:37.812 [2024-11-20 09:11:16.620434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.620506] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:37.812 [2024-11-20 09:11:16.621064] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:37.812 [2024-11-20 09:11:16.621090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.621097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.812 [2024-11-20 09:11:16.621105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:17:37.812 [2024-11-20 09:11:16.621112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.621220] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b0d92c9b-576a-461a-9df9-bb3d9af603a9 00:17:37.812 [2024-11-20 09:11:16.622232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.622262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:37.812 [2024-11-20 09:11:16.622270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:37.812 [2024-11-20 09:11:16.622278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.627480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.627597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.812 [2024-11-20 09:11:16.627611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.129 ms 00:17:37.812 [2024-11-20 09:11:16.627620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.627715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.627725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.812 [2024-11-20 09:11:16.627732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.053 ms 00:17:37.812 [2024-11-20 09:11:16.627741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.627776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.627783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:37.812 [2024-11-20 09:11:16.627790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:37.812 [2024-11-20 09:11:16.627797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.627827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:37.812 [2024-11-20 09:11:16.630749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.630840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.812 [2024-11-20 09:11:16.630856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.924 ms 00:17:37.812 [2024-11-20 09:11:16.630863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.630923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.630930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:37.812 [2024-11-20 09:11:16.630938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:37.812 [2024-11-20 09:11:16.630955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.630982] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:37.812 [2024-11-20 09:11:16.631085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:37.812 [2024-11-20 09:11:16.631098] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:37.812 [2024-11-20 09:11:16.631106] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:37.812 [2024-11-20 09:11:16.631115] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631121] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631129] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:37.812 [2024-11-20 09:11:16.631134] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:37.812 [2024-11-20 09:11:16.631141] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:37.812 [2024-11-20 09:11:16.631148] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:37.812 [2024-11-20 09:11:16.631155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 [2024-11-20 09:11:16.631160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:37.812 [2024-11-20 09:11:16.631168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:37.812 [2024-11-20 09:11:16.631174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.631256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.812 
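(Annotation, not part of the captured output.) The layout figures the target just printed can be cross-checked with plain shell arithmetic; every constant below is taken verbatim from this run (the 26476544-block base lvol c3ac869a-7dde-4de4-aded-210e436a51e9, the 23592960 L2P entries of 4 bytes each, and the --l2p_dram_limit 60 passed to bdev_ftl_create). A minimal sketch, assuming nothing beyond bash integer arithmetic:

  # base bdev c3ac869a-7dde-4de4-aded-210e436a51e9: 26476544 blocks x 4096 B
  echo $(( 26476544 * 4096 / 1024 / 1024 ))         # 103424 MiB, matching "Base device capacity"
  # L2P table: 23592960 entries x 4 B each, i.e. the 90.00 MiB "Region l2p" dumped below
  echo $(( 23592960 * 4 / 1024 / 1024 ))            # 90 MiB
  # user-visible FTL capacity: 23592960 blocks x 4096 B
  echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))  # 90 GiB
  # --l2p_dram_limit 60 caps how much of that 90 MiB table stays resident in DRAM,
  # hence the "l2p maximum resident size is: 59 (of 60) MiB" notice later in this log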
[2024-11-20 09:11:16.631262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:37.812 [2024-11-20 09:11:16.631270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:37.812 [2024-11-20 09:11:16.631275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.812 [2024-11-20 09:11:16.631370] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:37.812 [2024-11-20 09:11:16.631377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:37.812 [2024-11-20 09:11:16.631385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:37.812 [2024-11-20 09:11:16.631402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:37.812 [2024-11-20 09:11:16.631421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.812 [2024-11-20 09:11:16.631432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:37.812 [2024-11-20 09:11:16.631437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:37.812 [2024-11-20 09:11:16.631444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.812 [2024-11-20 09:11:16.631449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:37.812 [2024-11-20 09:11:16.631455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:37.812 [2024-11-20 09:11:16.631460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:37.812 [2024-11-20 09:11:16.631472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:37.812 [2024-11-20 09:11:16.631492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:37.812 [2024-11-20 09:11:16.631508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:37.812 [2024-11-20 09:11:16.631527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:37.812 [2024-11-20 09:11:16.631542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:37.812 [2024-11-20 09:11:16.631560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.812 [2024-11-20 09:11:16.631572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:37.812 [2024-11-20 09:11:16.631577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:37.812 [2024-11-20 09:11:16.631582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.812 [2024-11-20 09:11:16.631587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:37.812 [2024-11-20 09:11:16.631593] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:37.812 [2024-11-20 09:11:16.631598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:37.812 [2024-11-20 09:11:16.631609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:37.812 [2024-11-20 09:11:16.631615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631619] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:37.812 [2024-11-20 09:11:16.631626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:37.812 [2024-11-20 09:11:16.631631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.812 [2024-11-20 09:11:16.631638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.812 [2024-11-20 09:11:16.631643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:37.813 [2024-11-20 09:11:16.631652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:37.813 [2024-11-20 09:11:16.631657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:37.813 [2024-11-20 09:11:16.631663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:37.813 [2024-11-20 09:11:16.631668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:37.813 [2024-11-20 09:11:16.631674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:37.813 [2024-11-20 09:11:16.631681] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:37.813 [2024-11-20 09:11:16.631689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:37.813 [2024-11-20 09:11:16.631704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:37.813 [2024-11-20 09:11:16.631710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:37.813 [2024-11-20 09:11:16.631716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:37.813 [2024-11-20 09:11:16.631722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:37.813 [2024-11-20 09:11:16.631728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:37.813 [2024-11-20 09:11:16.631734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:37.813 [2024-11-20 09:11:16.631740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:37.813 [2024-11-20 09:11:16.631746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:37.813 [2024-11-20 09:11:16.631753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:37.813 [2024-11-20 09:11:16.631783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:37.813 [2024-11-20 09:11:16.631794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:37.813 [2024-11-20 09:11:16.631807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:37.813 [2024-11-20 09:11:16.631812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:37.813 [2024-11-20 09:11:16.631820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:37.813 [2024-11-20 09:11:16.631825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.813 [2024-11-20 09:11:16.631833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:37.813 [2024-11-20 09:11:16.631838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:37.813 [2024-11-20 09:11:16.631845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.813 [2024-11-20 09:11:16.631941] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:37.813 [2024-11-20 09:11:16.631953] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:40.340 [2024-11-20 09:11:18.997449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.340 [2024-11-20 09:11:18.997545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:40.340 [2024-11-20 09:11:18.997573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2365.494 ms 00:17:40.340 [2024-11-20 09:11:18.997596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.340 [2024-11-20 09:11:19.024823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.340 [2024-11-20 09:11:19.024883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.340 [2024-11-20 09:11:19.024896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.666 ms 00:17:40.340 [2024-11-20 09:11:19.024905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.340 [2024-11-20 09:11:19.025043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.340 [2024-11-20 09:11:19.025056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.340 [2024-11-20 09:11:19.025065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:40.340 [2024-11-20 09:11:19.025076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.340 [2024-11-20 09:11:19.068940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.340 [2024-11-20 09:11:19.069007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.340 [2024-11-20 09:11:19.069028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.808 ms 00:17:40.340 [2024-11-20 09:11:19.069048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.340 [2024-11-20 09:11:19.069183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.340 [2024-11-20 09:11:19.069208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.340 [2024-11-20 09:11:19.069224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.340 [2024-11-20 09:11:19.069239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.340 [2024-11-20 09:11:19.069657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.340 [2024-11-20 09:11:19.069697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.341 [2024-11-20 09:11:19.069712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:17:40.341 [2024-11-20 09:11:19.069728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.069934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.069952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.341 [2024-11-20 09:11:19.069966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:17:40.341 [2024-11-20 09:11:19.069984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.085967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.086000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:40.341 [2024-11-20 09:11:19.086010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.922 ms 00:17:40.341 [2024-11-20 09:11:19.086019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.097428] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.341 [2024-11-20 09:11:19.112173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.112342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.341 [2024-11-20 09:11:19.112361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.055 ms 00:17:40.341 [2024-11-20 09:11:19.112369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.177495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.177539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:40.341 [2024-11-20 09:11:19.177553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.059 ms 00:17:40.341 [2024-11-20 09:11:19.177561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.177771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.177782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.341 [2024-11-20 09:11:19.177794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:40.341 [2024-11-20 09:11:19.177802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.201096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.201129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:40.341 [2024-11-20 09:11:19.201142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.256 ms 00:17:40.341 [2024-11-20 09:11:19.201150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.223739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.223886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:40.341 [2024-11-20 09:11:19.223907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.519 ms 00:17:40.341 [2024-11-20 09:11:19.223914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.341 [2024-11-20 09:11:19.224490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.341 [2024-11-20 09:11:19.224509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.341 [2024-11-20 09:11:19.224520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:17:40.341 [2024-11-20 09:11:19.224527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.599 [2024-11-20 09:11:19.291828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.599 [2024-11-20 09:11:19.291996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:40.599 [2024-11-20 09:11:19.292024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.258 ms 00:17:40.599 [2024-11-20 09:11:19.292032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
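(Annotation, not part of the captured output.) Once the "FTL startup" management process finishes below, the test blocks in waitforbdev, which is what issues the bdev_wait_for_examine and bdev_get_bdevs -b ftl0 -t 2000 calls visible further down in this transcript. A stand-alone sketch of the same readiness check, assuming only the rpc.py path, the ftl0 bdev name, and the -t value already used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # flush any pending bdev examine callbacks first
  "$rpc" bdev_wait_for_examine
  # probe for ftl0, allowing up to 2000 ms for it to appear (the -t value waitforbdev passes here)
  "$rpc" bdev_get_bdevs -b ftl0 -t 2000 > /dev/null && echo 'ftl0 is up'
  # the FTL bdev should expose 23592960 blocks, as the JSON dump below confirms
  "$rpc" bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks'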
00:17:40.599 [2024-11-20 09:11:19.316400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.599 [2024-11-20 09:11:19.316433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:40.599 [2024-11-20 09:11:19.316446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.264 ms 00:17:40.599 [2024-11-20 09:11:19.316453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.599 [2024-11-20 09:11:19.339150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.599 [2024-11-20 09:11:19.339271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:40.599 [2024-11-20 09:11:19.339290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.632 ms 00:17:40.599 [2024-11-20 09:11:19.339297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.599 [2024-11-20 09:11:19.362221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.599 [2024-11-20 09:11:19.362342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.599 [2024-11-20 09:11:19.362361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.852 ms 00:17:40.600 [2024-11-20 09:11:19.362381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.600 [2024-11-20 09:11:19.362440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.600 [2024-11-20 09:11:19.362452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.600 [2024-11-20 09:11:19.362465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.600 [2024-11-20 09:11:19.362472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.600 [2024-11-20 09:11:19.362548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.600 [2024-11-20 09:11:19.362558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.600 [2024-11-20 09:11:19.362567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:40.600 [2024-11-20 09:11:19.362574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.600 [2024-11-20 09:11:19.363505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.600 [2024-11-20 09:11:19.366579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2745.151 ms, result 0 00:17:40.600 [2024-11-20 09:11:19.367434] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.600 { 00:17:40.600 "name": "ftl0", 00:17:40.600 "uuid": "b0d92c9b-576a-461a-9df9-bb3d9af603a9" 00:17:40.600 } 00:17:40.600 09:11:19 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:40.600 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:40.600 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:40.600 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:17:40.600 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:40.600 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:40.600 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:40.857 09:11:19 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:41.116 [ 00:17:41.116 { 00:17:41.116 "name": "ftl0", 00:17:41.116 "aliases": [ 00:17:41.116 "b0d92c9b-576a-461a-9df9-bb3d9af603a9" 00:17:41.116 ], 00:17:41.116 "product_name": "FTL disk", 00:17:41.116 "block_size": 4096, 00:17:41.116 "num_blocks": 23592960, 00:17:41.116 "uuid": "b0d92c9b-576a-461a-9df9-bb3d9af603a9", 00:17:41.116 "assigned_rate_limits": { 00:17:41.116 "rw_ios_per_sec": 0, 00:17:41.116 "rw_mbytes_per_sec": 0, 00:17:41.116 "r_mbytes_per_sec": 0, 00:17:41.116 "w_mbytes_per_sec": 0 00:17:41.116 }, 00:17:41.116 "claimed": false, 00:17:41.116 "zoned": false, 00:17:41.116 "supported_io_types": { 00:17:41.116 "read": true, 00:17:41.116 "write": true, 00:17:41.116 "unmap": true, 00:17:41.116 "flush": true, 00:17:41.116 "reset": false, 00:17:41.116 "nvme_admin": false, 00:17:41.116 "nvme_io": false, 00:17:41.116 "nvme_io_md": false, 00:17:41.116 "write_zeroes": true, 00:17:41.116 "zcopy": false, 00:17:41.116 "get_zone_info": false, 00:17:41.116 "zone_management": false, 00:17:41.116 "zone_append": false, 00:17:41.116 "compare": false, 00:17:41.116 "compare_and_write": false, 00:17:41.116 "abort": false, 00:17:41.116 "seek_hole": false, 00:17:41.116 "seek_data": false, 00:17:41.116 "copy": false, 00:17:41.116 "nvme_iov_md": false 00:17:41.116 }, 00:17:41.116 "driver_specific": { 00:17:41.116 "ftl": { 00:17:41.116 "base_bdev": "c3ac869a-7dde-4de4-aded-210e436a51e9", 00:17:41.116 "cache": "nvc0n1p0" 00:17:41.116 } 00:17:41.116 } 00:17:41.116 } 00:17:41.116 ] 00:17:41.116 09:11:19 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:17:41.116 09:11:19 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:41.116 09:11:19 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:41.116 09:11:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:41.116 09:11:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:41.374 09:11:20 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:41.374 { 00:17:41.374 "name": "ftl0", 00:17:41.374 "aliases": [ 00:17:41.374 "b0d92c9b-576a-461a-9df9-bb3d9af603a9" 00:17:41.374 ], 00:17:41.374 "product_name": "FTL disk", 00:17:41.374 "block_size": 4096, 00:17:41.374 "num_blocks": 23592960, 00:17:41.374 "uuid": "b0d92c9b-576a-461a-9df9-bb3d9af603a9", 00:17:41.374 "assigned_rate_limits": { 00:17:41.374 "rw_ios_per_sec": 0, 00:17:41.374 "rw_mbytes_per_sec": 0, 00:17:41.374 "r_mbytes_per_sec": 0, 00:17:41.374 "w_mbytes_per_sec": 0 00:17:41.374 }, 00:17:41.374 "claimed": false, 00:17:41.374 "zoned": false, 00:17:41.374 "supported_io_types": { 00:17:41.374 "read": true, 00:17:41.374 "write": true, 00:17:41.374 "unmap": true, 00:17:41.374 "flush": true, 00:17:41.374 "reset": false, 00:17:41.374 "nvme_admin": false, 00:17:41.374 "nvme_io": false, 00:17:41.374 "nvme_io_md": false, 00:17:41.374 "write_zeroes": true, 00:17:41.374 "zcopy": false, 00:17:41.374 "get_zone_info": false, 00:17:41.374 "zone_management": false, 00:17:41.374 "zone_append": false, 00:17:41.374 "compare": false, 00:17:41.374 "compare_and_write": false, 00:17:41.374 "abort": false, 00:17:41.374 "seek_hole": false, 00:17:41.374 "seek_data": false, 00:17:41.374 "copy": false, 00:17:41.374 "nvme_iov_md": false 00:17:41.374 }, 00:17:41.374 "driver_specific": { 00:17:41.374 "ftl": { 00:17:41.374 "base_bdev": "c3ac869a-7dde-4de4-aded-210e436a51e9", 
00:17:41.374 "cache": "nvc0n1p0" 00:17:41.374 } 00:17:41.374 } 00:17:41.374 } 00:17:41.374 ]' 00:17:41.375 09:11:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:41.375 09:11:20 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:41.375 09:11:20 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:41.633 [2024-11-20 09:11:20.407449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.407489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:41.633 [2024-11-20 09:11:20.407504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:41.633 [2024-11-20 09:11:20.407515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.407553] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:41.633 [2024-11-20 09:11:20.410169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.410196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:41.633 [2024-11-20 09:11:20.410211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.599 ms 00:17:41.633 [2024-11-20 09:11:20.410219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.410842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.410862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:41.633 [2024-11-20 09:11:20.410883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:17:41.633 [2024-11-20 09:11:20.410890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.414602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.414623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:41.633 [2024-11-20 09:11:20.414635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:17:41.633 [2024-11-20 09:11:20.414643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.421647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.421773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:41.633 [2024-11-20 09:11:20.421791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.954 ms 00:17:41.633 [2024-11-20 09:11:20.421799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.444821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.444958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:41.633 [2024-11-20 09:11:20.444981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.918 ms 00:17:41.633 [2024-11-20 09:11:20.444989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.459943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.460061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:41.633 [2024-11-20 09:11:20.460081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.895 ms 00:17:41.633 [2024-11-20 09:11:20.460091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.460303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.460313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:41.633 [2024-11-20 09:11:20.460323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:41.633 [2024-11-20 09:11:20.460330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.483381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.483490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:41.633 [2024-11-20 09:11:20.483509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.021 ms 00:17:41.633 [2024-11-20 09:11:20.483516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.506260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.506361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:41.633 [2024-11-20 09:11:20.506414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.687 ms 00:17:41.633 [2024-11-20 09:11:20.506435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.633 [2024-11-20 09:11:20.528966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.633 [2024-11-20 09:11:20.529071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:41.633 [2024-11-20 09:11:20.529122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.455 ms 00:17:41.633 [2024-11-20 09:11:20.529143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.893 [2024-11-20 09:11:20.551537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.893 [2024-11-20 09:11:20.551640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:41.893 [2024-11-20 09:11:20.551691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.269 ms 00:17:41.893 [2024-11-20 09:11:20.551712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.893 [2024-11-20 09:11:20.551801] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:41.893 [2024-11-20 09:11:20.551832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.551914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.551946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552194] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:41.893 [2024-11-20 09:11:20.552388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.552981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 
[2024-11-20 09:11:20.553244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.553997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:41.894 [2024-11-20 09:11:20.554474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.554986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:41.894 [2024-11-20 09:11:20.555779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.555807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.555838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.555914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.555947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.555975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.556006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.556093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.556104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.556113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.556124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:41.895 [2024-11-20 09:11:20.556139] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:41.895 [2024-11-20 09:11:20.556151] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9 00:17:41.895 [2024-11-20 09:11:20.556159] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:41.895 [2024-11-20 09:11:20.556168] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:41.895 [2024-11-20 09:11:20.556175] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:41.895 [2024-11-20 09:11:20.556184] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:41.895 [2024-11-20 09:11:20.556193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:41.895 [2024-11-20 09:11:20.556202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:41.895 [2024-11-20 09:11:20.556209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:41.895 [2024-11-20 09:11:20.556217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:41.895 [2024-11-20 09:11:20.556223] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:41.895 [2024-11-20 09:11:20.556232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.895 [2024-11-20 09:11:20.556239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:41.895 [2024-11-20 09:11:20.556249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.433 ms 00:17:41.895 [2024-11-20 09:11:20.556256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.568972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.895 [2024-11-20 09:11:20.569074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:41.895 [2024-11-20 09:11:20.569095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.654 ms 00:17:41.895 [2024-11-20 09:11:20.569103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.569482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.895 [2024-11-20 09:11:20.569499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:41.895 [2024-11-20 09:11:20.569509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:41.895 [2024-11-20 09:11:20.569516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.613740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.613776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:41.895 [2024-11-20 09:11:20.613787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.613795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.613898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.613908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:41.895 [2024-11-20 09:11:20.613917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.613925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.613982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.613991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:41.895 [2024-11-20 09:11:20.614004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.614011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.614040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.614048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:41.895 [2024-11-20 09:11:20.614057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.614064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.696054] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.696183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:41.895 [2024-11-20 09:11:20.696201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.696208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.758524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.758562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:41.895 [2024-11-20 09:11:20.758574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.758581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.758655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.758665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.895 [2024-11-20 09:11:20.758689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.758699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.758763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.758772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.895 [2024-11-20 09:11:20.758781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.758788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.759054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.759087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:41.895 [2024-11-20 09:11:20.759110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.759128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.759208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.759232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:41.895 [2024-11-20 09:11:20.759253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.759272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.759393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.759419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.895 [2024-11-20 09:11:20.759442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.759460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.895 [2024-11-20 09:11:20.759527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:41.895 [2024-11-20 09:11:20.759600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.895 [2024-11-20 09:11:20.759611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:41.895 [2024-11-20 09:11:20.759618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:17:41.895 [2024-11-20 09:11:20.759807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 352.335 ms, result 0
00:17:41.895 true
00:17:41.895 09:11:20 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73781
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 73781 ']'
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 73781
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73781
00:17:41.895 killing process with pid 73781
09:11:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73781'
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 73781
00:17:41.895 09:11:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 73781
00:17:48.456 09:11:26 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:17:48.717 65536+0 records in
00:17:48.717 65536+0 records out
00:17:48.717 268435456 bytes (268 MB, 256 MiB) copied, 1.07579 s, 250 MB/s
00:17:48.717 09:11:27 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 09:11:27.571998] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
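The band dump that closes the 'FTL shutdown' trace above reports, for each of the 100 bands, the valid blocks out of 261120, the write count, and the band state; with 960 internal writes and zero user writes the write-amplification factor is reported as "WAF: inf". The fixed record shape makes the dump easy to summarize from a saved copy of this console output. A minimal sketch, not part of the test suite, where build.log is a hypothetical capture of this page:

#!/usr/bin/env bash
# Tally band states across every ftl_dev_dump_bands dump in the log.
# The pattern is literal (no greedy wildcards), so it still matches when
# several records are wrapped onto one long line, as above.
grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
  awk '{ states[$NF]++ } END { for (s in states) printf "%s: %d band record(s)\n", s, states[s] }'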
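The two xtrace lines above are the write phase of ftl/trim.sh: 65536 blocks of 4 KiB (256 MiB) of random data are produced with dd, then pushed into the ftl0 bdev with spdk_dd, which brings up its own single-core SPDK application from the JSON config instead of talking to a running target. Reconstructed as a standalone sketch; the dd output redirection is inferred from the later --if argument, since the trace shows the command only up to its options:

#!/usr/bin/env bash
spdk=/home/vagrant/spdk_repo/spdk

# 65536 x 4 KiB = 268435456 bytes, matching the dd accounting above.
dd if=/dev/urandom bs=4K count=65536 > "$spdk/test/ftl/random_pattern"

# --if reads a regular file, --ob writes to the named SPDK bdev; the JSON
# config describes the FTL device (base bdev plus NV cache) to bring up.
"$spdk/build/bin/spdk_dd" --if="$spdk/test/ftl/random_pattern" \
  --ob=ftl0 --json="$spdk/test/ftl/config/ftl.json"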
00:17:48.717 [2024-11-20 09:11:27.572106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73965 ] 00:17:48.978 [2024-11-20 09:11:27.724443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.978 [2024-11-20 09:11:27.822735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.239 [2024-11-20 09:11:28.081217] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.239 [2024-11-20 09:11:28.081292] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:49.501 [2024-11-20 09:11:28.244575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.244639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:49.501 [2024-11-20 09:11:28.244655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.501 [2024-11-20 09:11:28.244665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.247795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.247849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.501 [2024-11-20 09:11:28.247861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:17:49.501 [2024-11-20 09:11:28.247888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.248006] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:49.501 [2024-11-20 09:11:28.248760] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:49.501 [2024-11-20 09:11:28.248948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.248964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.501 [2024-11-20 09:11:28.248974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.950 ms 00:17:49.501 [2024-11-20 09:11:28.248983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.250731] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:49.501 [2024-11-20 09:11:28.264814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.265015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:49.501 [2024-11-20 09:11:28.265038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.084 ms 00:17:49.501 [2024-11-20 09:11:28.265049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.265532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.265569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:49.501 [2024-11-20 09:11:28.265582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:49.501 [2024-11-20 09:11:28.265591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.270617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:49.501 [2024-11-20 09:11:28.270650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.501 [2024-11-20 09:11:28.270660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:17:49.501 [2024-11-20 09:11:28.270667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.270756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.270765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.501 [2024-11-20 09:11:28.270773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:49.501 [2024-11-20 09:11:28.270781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.270805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.270816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:49.501 [2024-11-20 09:11:28.270824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:49.501 [2024-11-20 09:11:28.270831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.270852] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:49.501 [2024-11-20 09:11:28.274262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.274289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.501 [2024-11-20 09:11:28.274298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.415 ms 00:17:49.501 [2024-11-20 09:11:28.274304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.274338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.274346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:49.501 [2024-11-20 09:11:28.274353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:49.501 [2024-11-20 09:11:28.274360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.274377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:49.501 [2024-11-20 09:11:28.274395] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:49.501 [2024-11-20 09:11:28.274428] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:49.501 [2024-11-20 09:11:28.274443] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:49.501 [2024-11-20 09:11:28.274544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:49.501 [2024-11-20 09:11:28.274554] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:49.501 [2024-11-20 09:11:28.274564] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:49.501 [2024-11-20 09:11:28.274574] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:49.501 [2024-11-20 09:11:28.274585] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:49.501 [2024-11-20 09:11:28.274593] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:49.501 [2024-11-20 09:11:28.274600] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:49.501 [2024-11-20 09:11:28.274606] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:49.501 [2024-11-20 09:11:28.274613] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:49.501 [2024-11-20 09:11:28.274621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.274628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:49.501 [2024-11-20 09:11:28.274635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:17:49.501 [2024-11-20 09:11:28.274641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.274728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.501 [2024-11-20 09:11:28.274737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:49.501 [2024-11-20 09:11:28.274746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:49.501 [2024-11-20 09:11:28.274753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.501 [2024-11-20 09:11:28.274865] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:49.501 [2024-11-20 09:11:28.274892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:49.501 [2024-11-20 09:11:28.274901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.501 [2024-11-20 09:11:28.274909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.501 [2024-11-20 09:11:28.274917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:49.501 [2024-11-20 09:11:28.274923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:49.501 [2024-11-20 09:11:28.274930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:49.501 [2024-11-20 09:11:28.274937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:49.501 [2024-11-20 09:11:28.274944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:49.501 [2024-11-20 09:11:28.274951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.501 [2024-11-20 09:11:28.274957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:49.501 [2024-11-20 09:11:28.274963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:49.501 [2024-11-20 09:11:28.274969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:49.501 [2024-11-20 09:11:28.274981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:49.502 [2024-11-20 09:11:28.274989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:49.502 [2024-11-20 09:11:28.274996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:49.502 [2024-11-20 09:11:28.275009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275015] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:49.502 [2024-11-20 09:11:28.275028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:49.502 [2024-11-20 09:11:28.275048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:49.502 [2024-11-20 09:11:28.275067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:49.502 [2024-11-20 09:11:28.275086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:49.502 [2024-11-20 09:11:28.275106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.502 [2024-11-20 09:11:28.275121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:49.502 [2024-11-20 09:11:28.275127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:49.502 [2024-11-20 09:11:28.275133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:49.502 [2024-11-20 09:11:28.275140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:49.502 [2024-11-20 09:11:28.275147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:49.502 [2024-11-20 09:11:28.275153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:49.502 [2024-11-20 09:11:28.275165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:49.502 [2024-11-20 09:11:28.275171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275177] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:49.502 [2024-11-20 09:11:28.275185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:49.502 [2024-11-20 09:11:28.275192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:49.502 [2024-11-20 09:11:28.275209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:49.502 [2024-11-20 09:11:28.275215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:49.502 [2024-11-20 09:11:28.275222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:49.502 
[2024-11-20 09:11:28.275229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:49.502 [2024-11-20 09:11:28.275235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:49.502 [2024-11-20 09:11:28.275241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:49.502 [2024-11-20 09:11:28.275250] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:49.502 [2024-11-20 09:11:28.275258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:49.502 [2024-11-20 09:11:28.275273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:49.502 [2024-11-20 09:11:28.275279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:49.502 [2024-11-20 09:11:28.275287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:49.502 [2024-11-20 09:11:28.275294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:49.502 [2024-11-20 09:11:28.275301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:49.502 [2024-11-20 09:11:28.275307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:49.502 [2024-11-20 09:11:28.275314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:49.502 [2024-11-20 09:11:28.275321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:49.502 [2024-11-20 09:11:28.275327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:49.502 [2024-11-20 09:11:28.275363] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:49.502 [2024-11-20 09:11:28.275370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:49.502 [2024-11-20 09:11:28.275384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:49.502 [2024-11-20 09:11:28.275392] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:49.502 [2024-11-20 09:11:28.275399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:49.502 [2024-11-20 09:11:28.275406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.275413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:49.502 [2024-11-20 09:11:28.275422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:17:49.502 [2024-11-20 09:11:28.275429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.301301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.301331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.502 [2024-11-20 09:11:28.301341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.808 ms 00:17:49.502 [2024-11-20 09:11:28.301348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.301461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.301474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:49.502 [2024-11-20 09:11:28.301481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:49.502 [2024-11-20 09:11:28.301489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.351610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.351650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.502 [2024-11-20 09:11:28.351661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.101 ms 00:17:49.502 [2024-11-20 09:11:28.351672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.351758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.351769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.502 [2024-11-20 09:11:28.351778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:49.502 [2024-11-20 09:11:28.351785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.352132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.352158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.502 [2024-11-20 09:11:28.352167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:17:49.502 [2024-11-20 09:11:28.352181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.352310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.352324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.502 [2024-11-20 09:11:28.352332] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:17:49.502 [2024-11-20 09:11:28.352340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.365864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.365905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.502 [2024-11-20 09:11:28.365915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.504 ms 00:17:49.502 [2024-11-20 09:11:28.365922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.378536] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:49.502 [2024-11-20 09:11:28.378568] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:49.502 [2024-11-20 09:11:28.378579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.502 [2024-11-20 09:11:28.378587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:49.502 [2024-11-20 09:11:28.378595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.568 ms 00:17:49.502 [2024-11-20 09:11:28.378602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.502 [2024-11-20 09:11:28.402562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.503 [2024-11-20 09:11:28.402606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:49.503 [2024-11-20 09:11:28.402623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.891 ms 00:17:49.503 [2024-11-20 09:11:28.402630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.503 [2024-11-20 09:11:28.414240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.503 [2024-11-20 09:11:28.414269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:49.503 [2024-11-20 09:11:28.414278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.543 ms 00:17:49.503 [2024-11-20 09:11:28.414285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.764 [2024-11-20 09:11:28.425864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.764 [2024-11-20 09:11:28.425901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:49.764 [2024-11-20 09:11:28.425910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.517 ms 00:17:49.764 [2024-11-20 09:11:28.425916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.764 [2024-11-20 09:11:28.426520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.764 [2024-11-20 09:11:28.426544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:49.764 [2024-11-20 09:11:28.426553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:17:49.764 [2024-11-20 09:11:28.426560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.764 [2024-11-20 09:11:28.482700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.764 [2024-11-20 09:11:28.482742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:49.764 [2024-11-20 09:11:28.482754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.117 ms 00:17:49.765 [2024-11-20 09:11:28.482762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.493217] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:49.765 [2024-11-20 09:11:28.507212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.507245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:49.765 [2024-11-20 09:11:28.507256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.360 ms 00:17:49.765 [2024-11-20 09:11:28.507264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.507343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.507353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:49.765 [2024-11-20 09:11:28.507363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:49.765 [2024-11-20 09:11:28.507370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.507415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.507423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:49.765 [2024-11-20 09:11:28.507431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:49.765 [2024-11-20 09:11:28.507439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.507465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.507476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:49.765 [2024-11-20 09:11:28.507484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:49.765 [2024-11-20 09:11:28.507491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.507519] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:49.765 [2024-11-20 09:11:28.507528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.507536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:49.765 [2024-11-20 09:11:28.507543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:49.765 [2024-11-20 09:11:28.507550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.531358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.531478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:49.765 [2024-11-20 09:11:28.531494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.787 ms 00:17:49.765 [2024-11-20 09:11:28.531503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.765 [2024-11-20 09:11:28.531588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.765 [2024-11-20 09:11:28.531599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:49.765 [2024-11-20 09:11:28.531607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:49.765 [2024-11-20 09:11:28.531615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
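Each management step in the 'FTL startup' sequence above is traced as an Action with a name, duration and status, so the expensive steps (Restore P2L checkpoints at 56.117 ms, Initialize NV cache at 50.101 ms, Initialize metadata at 25.808 ms) can be pulled straight out of the log. Since the console output arrives with many records wrapped onto one line, a sketch like the following first re-splits it at the Jenkins elapsed-time stamps and then pairs each step name with the duration that follows it; build.log is again a hypothetical capture, and the \n replacement relies on GNU sed:

#!/usr/bin/env bash
# Restore one record per line: every record begins with an elapsed stamp
# of the form HH:MM:SS.mmm followed by a space.
sed 's/ \([0-9][0-9]:[0-9][0-9]:[0-9][0-9]\.[0-9][0-9][0-9] \)/\n\1/g' build.log > unwrapped.log

# Pair each trace_step "name:" with the "duration:" that follows it and
# list the ten slowest management steps.
awk '/trace_step.*name:/     { name = $0; sub(/.*name: /, "", name) }
     /trace_step.*duration:/ { print $(NF - 1), name }' unwrapped.log |
  sort -rn | head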
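With startup complete, spdk_dd streams the 256 MiB pattern into ftl0 (the Copying progress below, averaging 16 MBps). The same binary also drives the opposite direction, which a trim test's later verification depends on: --ib names a bdev as the input and --of a regular file as the output. An illustrative sketch only, with /tmp/read_pattern as a hypothetical destination; the real script's read-back and compare steps are not shown in this excerpt:

#!/usr/bin/env bash
spdk=/home/vagrant/spdk_repo/spdk

# Read the same 65536 x 4 KiB span back out of the FTL bdev into a file.
"$spdk/build/bin/spdk_dd" --ib=ftl0 --of=/tmp/read_pattern \
  --bs=4096 --count=65536 --json="$spdk/test/ftl/config/ftl.json"

# Byte-compare with the pattern that was written; cmp is silent on a
# match and exits non-zero at the first difference.
cmp /tmp/read_pattern "$spdk/test/ftl/random_pattern"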
00:17:49.765 [2024-11-20 09:11:28.532488] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:49.765 [2024-11-20 09:11:28.535468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.542 ms, result 0
00:17:49.765 [2024-11-20 09:11:28.536299] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:49.765 [2024-11-20 09:11:28.549238] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:50.706  [2024-11-20T09:11:30.566Z] Copying: 15/256 [MB] (15 MBps) [2024-11-20T09:11:31.952Z] Copying: 27/256 [MB] (12 MBps) [2024-11-20T09:11:32.894Z] Copying: 43/256 [MB] (15 MBps) [2024-11-20T09:11:33.839Z] Copying: 58/256 [MB] (14 MBps) [2024-11-20T09:11:34.783Z] Copying: 73/256 [MB] (15 MBps) [2024-11-20T09:11:35.726Z] Copying: 83/256 [MB] (10 MBps) [2024-11-20T09:11:36.670Z] Copying: 95320/262144 [kB] (10144 kBps) [2024-11-20T09:11:37.615Z] Copying: 106/256 [MB] (13 MBps) [2024-11-20T09:11:38.558Z] Copying: 116/256 [MB] (10 MBps) [2024-11-20T09:11:39.939Z] Copying: 126/256 [MB] (10 MBps) [2024-11-20T09:11:40.874Z] Copying: 137/256 [MB] (10 MBps) [2024-11-20T09:11:41.814Z] Copying: 167/256 [MB] (29 MBps) [2024-11-20T09:11:42.756Z] Copying: 201/256 [MB] (34 MBps) [2024-11-20T09:11:43.699Z] Copying: 212/256 [MB] (10 MBps) [2024-11-20T09:11:44.644Z] Copying: 231/256 [MB] (18 MBps) [2024-11-20T09:11:44.644Z] Copying: 256/256 [MB] (average 16 MBps)
[2024-11-20 09:11:44.436113] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:05.725 [2024-11-20 09:11:44.443754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.725 [2024-11-20 09:11:44.443896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:05.725 [2024-11-20 09:11:44.443914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:05.725 [2024-11-20 09:11:44.443921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.725 [2024-11-20 09:11:44.443946] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:05.725 [2024-11-20 09:11:44.446190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.725 [2024-11-20 09:11:44.446217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:05.725 [2024-11-20 09:11:44.446226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.233 ms
00:18:05.725 [2024-11-20 09:11:44.446233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.725 [2024-11-20 09:11:44.448147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.725 [2024-11-20 09:11:44.448172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:05.725 [2024-11-20 09:11:44.448180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.896 ms
00:18:05.725 [2024-11-20 09:11:44.448186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.725 [2024-11-20 09:11:44.454327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.725 [2024-11-20 09:11:44.454357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:05.725 [2024-11-20 09:11:44.454365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 6.128 ms 00:18:05.725 [2024-11-20 09:11:44.454371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.725 [2024-11-20 09:11:44.459630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.725 [2024-11-20 09:11:44.459734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:05.725 [2024-11-20 09:11:44.459748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.224 ms 00:18:05.725 [2024-11-20 09:11:44.459755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.725 [2024-11-20 09:11:44.478170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.725 [2024-11-20 09:11:44.478272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:05.725 [2024-11-20 09:11:44.478284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.379 ms 00:18:05.725 [2024-11-20 09:11:44.478290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.725 [2024-11-20 09:11:44.490681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.725 [2024-11-20 09:11:44.490712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:05.725 [2024-11-20 09:11:44.490722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.366 ms 00:18:05.725 [2024-11-20 09:11:44.490730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.725 [2024-11-20 09:11:44.490837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.725 [2024-11-20 09:11:44.490844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:05.725 [2024-11-20 09:11:44.490852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:05.726 [2024-11-20 09:11:44.490858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.726 [2024-11-20 09:11:44.509124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.726 [2024-11-20 09:11:44.509224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:05.726 [2024-11-20 09:11:44.509236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.254 ms 00:18:05.726 [2024-11-20 09:11:44.509241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.726 [2024-11-20 09:11:44.527791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.726 [2024-11-20 09:11:44.527816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:05.726 [2024-11-20 09:11:44.527823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.517 ms 00:18:05.726 [2024-11-20 09:11:44.527829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.726 [2024-11-20 09:11:44.545462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.726 [2024-11-20 09:11:44.545486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:05.726 [2024-11-20 09:11:44.545493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.606 ms 00:18:05.726 [2024-11-20 09:11:44.545499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.726 [2024-11-20 09:11:44.563089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.726 [2024-11-20 09:11:44.563113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:05.726 [2024-11-20 
09:11:44.563120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:18:05.726 [2024-11-20 09:11:44.563126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.726 [2024-11-20 09:11:44.563153] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:05.726 [2024-11-20 09:11:44.563165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:05.726 [2024-11-20 09:11:44.563293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 23 through Band 100: 0 / 261120 wr_cnt: 0 state: free [78 identical per-band ftl_dev_dump_bands entries collapsed]
00:18:05.727 [2024-11-20 09:11:44.563763] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:05.727 [2024-11-20 09:11:44.563770] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9
00:18:05.727 [2024-11-20 09:11:44.563777] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:05.727 [2024-11-20 09:11:44.563783] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:05.727 [2024-11-20 09:11:44.563789] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:05.727 [2024-11-20 09:11:44.563795] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:05.727 [2024-11-20 09:11:44.563800] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:05.727 [2024-11-20 09:11:44.563806] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:05.727 [2024-11-20 09:11:44.563812] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:05.727 [2024-11-20 09:11:44.563817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:05.727 [2024-11-20 09:11:44.563822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:05.727 [2024-11-20 09:11:44.563827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.727 [2024-11-20 09:11:44.563834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:05.727 [2024-11-20 09:11:44.563841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms
00:18:05.727 [2024-11-20 09:11:44.563846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.727 [2024-11-20 09:11:44.573921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.727 [2024-11-20 09:11:44.573946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:05.727 [2024-11-20 09:11:44.573955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.061 ms
00:18:05.727 [2024-11-20 09:11:44.573961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.727 [2024-11-20 09:11:44.574268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:05.727 [2024-11-20 09:11:44.574277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:05.727 [2024-11-20 09:11:44.574284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms
00:18:05.727 [2024-11-20 09:11:44.574290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:05.727 [2024-11-20 09:11:44.603556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:05.727 [2024-11-20 09:11:44.603582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:05.727 [2024-11-20 09:11:44.603590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:05.727 [2024-11-20 09:11:44.603597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
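The "WAF: inf" line in the statistics dump above follows directly from the two counters next to it: this pass issued only FTL metadata writes, so the conventional write-amplification ratio has a zero denominator (a reading of the dumped counters; SPDK's exact internal formula may differ):

    WAF = total writes / user writes = 960 / 0  ->  undefined, printed as "inf"

00:18:05.727 [2024-11-20 09:11:44.603674] mngt/ftl_mngt.c: 427:trace_step: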
*NOTICE*: [FTL][ftl0] Rollback 00:18:05.727 [2024-11-20 09:11:44.603681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.727 [2024-11-20 09:11:44.603687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.727 [2024-11-20 09:11:44.603693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.727 [2024-11-20 09:11:44.603730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.727 [2024-11-20 09:11:44.603738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.727 [2024-11-20 09:11:44.603744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.727 [2024-11-20 09:11:44.603750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.727 [2024-11-20 09:11:44.603763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.727 [2024-11-20 09:11:44.603772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.727 [2024-11-20 09:11:44.603777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.727 [2024-11-20 09:11:44.603783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.667499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.667532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.989 [2024-11-20 09:11:44.667542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.667549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.989 [2024-11-20 09:11:44.719416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.989 [2024-11-20 09:11:44.719503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.989 [2024-11-20 09:11:44.719548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.989 [2024-11-20 09:11:44.719647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:18:05.989 [2024-11-20 09:11:44.719680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:05.989 [2024-11-20 09:11:44.719694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.989 [2024-11-20 09:11:44.719751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:05.989 [2024-11-20 09:11:44.719804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.989 [2024-11-20 09:11:44.719814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:05.989 [2024-11-20 09:11:44.719820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.989 [2024-11-20 09:11:44.719960] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.190 ms, result 0 00:18:06.992 00:18:06.992 00:18:06.992 09:11:45 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74156 00:18:06.992 09:11:45 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74156 00:18:06.992 09:11:45 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74156 ']' 00:18:06.992 09:11:45 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:06.992 09:11:45 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.992 09:11:45 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.992 09:11:45 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.992 09:11:45 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.992 09:11:45 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:06.992 [2024-11-20 09:11:45.689946] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
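The xtrace above is the suite's standard launch-and-wait idiom: start spdk_tgt in the background, record its pid in svcpid, and let the waitforlisten helper from autotest_common.sh poll (max_retries=100) until the target answers on /var/tmp/spdk.sock. Reduced to a minimal sketch using the paths from this run (not the verbatim trim.sh code):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &   # -L enables the ftl_init debug log component
    svcpid=$!                                                       # 74156 in this run
    waitforlisten "$svcpid"                                         # retries until /var/tmp/spdk.sock accepts RPCs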
00:18:06.992 [2024-11-20 09:11:45.690079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74156 ] 00:18:06.992 [2024-11-20 09:11:45.847266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.253 [2024-11-20 09:11:45.956754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.824 09:11:46 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:07.824 09:11:46 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:07.824 09:11:46 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:07.824 [2024-11-20 09:11:46.713519] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:07.824 [2024-11-20 09:11:46.713570] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:08.086 [2024-11-20 09:11:46.867089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.867261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:08.086 [2024-11-20 09:11:46.867282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.086 [2024-11-20 09:11:46.867289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.869528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.869559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.086 [2024-11-20 09:11:46.869568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:18:08.086 [2024-11-20 09:11:46.869575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.869643] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:08.086 [2024-11-20 09:11:46.870456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:08.086 [2024-11-20 09:11:46.870584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.870595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.086 [2024-11-20 09:11:46.870604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:18:08.086 [2024-11-20 09:11:46.870611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.872054] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:08.086 [2024-11-20 09:11:46.882424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.882528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:08.086 [2024-11-20 09:11:46.882576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.374 ms 00:18:08.086 [2024-11-20 09:11:46.882597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.882887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.882911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:08.086 [2024-11-20 09:11:46.882920] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:18:08.086 [2024-11-20 09:11:46.882928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.889308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.889492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.086 [2024-11-20 09:11:46.889505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.340 ms 00:18:08.086 [2024-11-20 09:11:46.889512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.889605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.889616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.086 [2024-11-20 09:11:46.889624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:08.086 [2024-11-20 09:11:46.889634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.889657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.889665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:08.086 [2024-11-20 09:11:46.889671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.086 [2024-11-20 09:11:46.889678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.889697] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:08.086 [2024-11-20 09:11:46.892681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.892782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.086 [2024-11-20 09:11:46.892798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.988 ms 00:18:08.086 [2024-11-20 09:11:46.892805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.892843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.892850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:08.086 [2024-11-20 09:11:46.892858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:08.086 [2024-11-20 09:11:46.892865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.892898] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:08.086 [2024-11-20 09:11:46.892916] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:08.086 [2024-11-20 09:11:46.892950] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:08.086 [2024-11-20 09:11:46.892962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:08.086 [2024-11-20 09:11:46.893062] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:08.086 [2024-11-20 09:11:46.893072] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:08.086 [2024-11-20 09:11:46.893086] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:08.086 [2024-11-20 09:11:46.893096] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893105] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893111] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:08.086 [2024-11-20 09:11:46.893118] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:08.086 [2024-11-20 09:11:46.893125] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:08.086 [2024-11-20 09:11:46.893134] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:08.086 [2024-11-20 09:11:46.893141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.893148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:08.086 [2024-11-20 09:11:46.893154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:08.086 [2024-11-20 09:11:46.893162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.893230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.086 [2024-11-20 09:11:46.893238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:08.086 [2024-11-20 09:11:46.893244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:08.086 [2024-11-20 09:11:46.893250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.086 [2024-11-20 09:11:46.893329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:08.086 [2024-11-20 09:11:46.893339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:08.086 [2024-11-20 09:11:46.893345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:08.086 [2024-11-20 09:11:46.893366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:08.086 [2024-11-20 09:11:46.893388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.086 [2024-11-20 09:11:46.893399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:08.086 [2024-11-20 09:11:46.893406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:08.086 [2024-11-20 09:11:46.893411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.086 [2024-11-20 09:11:46.893417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:08.086 [2024-11-20 09:11:46.893423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:08.086 [2024-11-20 09:11:46.893431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.086 
[2024-11-20 09:11:46.893437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:08.086 [2024-11-20 09:11:46.893443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:08.086 [2024-11-20 09:11:46.893465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:08.086 [2024-11-20 09:11:46.893485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:08.086 [2024-11-20 09:11:46.893501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:08.086 [2024-11-20 09:11:46.893520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:08.086 [2024-11-20 09:11:46.893525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.086 [2024-11-20 09:11:46.893531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:08.086 [2024-11-20 09:11:46.893536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:08.087 [2024-11-20 09:11:46.893543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.087 [2024-11-20 09:11:46.893549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:08.087 [2024-11-20 09:11:46.893556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:08.087 [2024-11-20 09:11:46.893560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.087 [2024-11-20 09:11:46.893567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:08.087 [2024-11-20 09:11:46.893571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:08.087 [2024-11-20 09:11:46.893580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.087 [2024-11-20 09:11:46.893585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:08.087 [2024-11-20 09:11:46.893592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:08.087 [2024-11-20 09:11:46.893597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.087 [2024-11-20 09:11:46.893603] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:08.087 [2024-11-20 09:11:46.893611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:08.087 [2024-11-20 09:11:46.893618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.087 [2024-11-20 09:11:46.893627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.087 [2024-11-20 09:11:46.893635] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap
00:18:08.087 [2024-11-20 09:11:46.893640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:18:08.087 [2024-11-20 09:11:46.893646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:18:08.087 [2024-11-20 09:11:46.893652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:18:08.087 [2024-11-20 09:11:46.893658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:18:08.087 [2024-11-20 09:11:46.893664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:18:08.087 [2024-11-20 09:11:46.893672] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:08.087 [2024-11-20 09:11:46.893679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:08.087 [2024-11-20 09:11:46.893689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:18:08.087 [2024-11-20 09:11:46.893694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:18:08.087 [2024-11-20 09:11:46.893702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:18:08.087 [2024-11-20 09:11:46.893708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:18:08.087 [2024-11-20 09:11:46.893715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:18:08.087 [2024-11-20 09:11:46.893721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:18:08.087 [2024-11-20 09:11:46.893727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:18:08.087 [2024-11-20 09:11:46.893732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:18:08.087 [2024-11-20 09:11:46.893739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:18:08.087 [2024-11-20 09:11:46.893744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:18:08.087 [2024-11-20 09:11:46.893751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:18:08.087 [2024-11-20 09:11:46.893757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:18:08.087 [2024-11-20 09:11:46.893763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:18:08.087 [2024-11-20 09:11:46.893769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
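The nvc metadata regions just dumped are packed back to back: each Region's blk_offs is the previous blk_offs plus its blk_sz, with type 0xfffffffe covering the unallocated remainder. A spot check of the first transitions with shell arithmetic (plain hex addition on the dumped values; an illustrative check, not part of the test):

    $ printf '0x%x\n' $(( 0x0 + 0x20 ))      # end of type 0x0 == blk_offs of type 0x2
    0x20
    $ printf '0x%x\n' $(( 0x20 + 0x5a00 ))   # end of type 0x2 == blk_offs of type 0x3
    0x5a20

00:18:08.087 [2024-11-20 09:11:46.893775] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:08.087 [2024-11-20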
09:11:46.893781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.087 [2024-11-20 09:11:46.893790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:08.087 [2024-11-20 09:11:46.893796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:08.087 [2024-11-20 09:11:46.893804] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:08.087 [2024-11-20 09:11:46.893809] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:08.087 [2024-11-20 09:11:46.893816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.893822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:08.087 [2024-11-20 09:11:46.893829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:18:08.087 [2024-11-20 09:11:46.893837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.918339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.918368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.087 [2024-11-20 09:11:46.918378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.432 ms 00:18:08.087 [2024-11-20 09:11:46.918386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.918481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.918490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:08.087 [2024-11-20 09:11:46.918499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:08.087 [2024-11-20 09:11:46.918505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.944923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.944951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.087 [2024-11-20 09:11:46.944960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.399 ms 00:18:08.087 [2024-11-20 09:11:46.944966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.945021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.945028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.087 [2024-11-20 09:11:46.945037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:08.087 [2024-11-20 09:11:46.945043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.945424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.945437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.087 [2024-11-20 09:11:46.945448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:08.087 [2024-11-20 09:11:46.945454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.945569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.945577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.087 [2024-11-20 09:11:46.945585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:08.087 [2024-11-20 09:11:46.945590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.959219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.959324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.087 [2024-11-20 09:11:46.959339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.609 ms 00:18:08.087 [2024-11-20 09:11:46.959345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.970122] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:08.087 [2024-11-20 09:11:46.970226] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:08.087 [2024-11-20 09:11:46.970245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.970253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:08.087 [2024-11-20 09:11:46.970262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.820 ms 00:18:08.087 [2024-11-20 09:11:46.970267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.989333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.989430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:08.087 [2024-11-20 09:11:46.989446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.008 ms 00:18:08.087 [2024-11-20 09:11:46.989453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.087 [2024-11-20 09:11:46.998781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.087 [2024-11-20 09:11:46.998806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:08.087 [2024-11-20 09:11:46.998817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.271 ms 00:18:08.087 [2024-11-20 09:11:46.998823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.007876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.007979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:08.348 [2024-11-20 09:11:47.007994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.001 ms 00:18:08.348 [2024-11-20 09:11:47.008000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.008466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.008477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:08.348 [2024-11-20 09:11:47.008486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:18:08.348 [2024-11-20 09:11:47.008492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 
09:11:47.067386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.067524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:08.348 [2024-11-20 09:11:47.067544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.872 ms 00:18:08.348 [2024-11-20 09:11:47.067552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.075668] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:08.348 [2024-11-20 09:11:47.090360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.090493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:08.348 [2024-11-20 09:11:47.090508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.731 ms 00:18:08.348 [2024-11-20 09:11:47.090516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.090583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.090593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:08.348 [2024-11-20 09:11:47.090600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:08.348 [2024-11-20 09:11:47.090607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.090650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.090659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:08.348 [2024-11-20 09:11:47.090666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:08.348 [2024-11-20 09:11:47.090676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.090696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.090705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:08.348 [2024-11-20 09:11:47.090712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:08.348 [2024-11-20 09:11:47.090721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.348 [2024-11-20 09:11:47.090750] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:08.348 [2024-11-20 09:11:47.090761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.348 [2024-11-20 09:11:47.090770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:08.349 [2024-11-20 09:11:47.090777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.349 [2024-11-20 09:11:47.090783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.349 [2024-11-20 09:11:47.110092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.349 [2024-11-20 09:11:47.110122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:08.349 [2024-11-20 09:11:47.110133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.286 ms 00:18:08.349 [2024-11-20 09:11:47.110140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.349 [2024-11-20 09:11:47.110218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.349 [2024-11-20 09:11:47.110226] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:08.349 [2024-11-20 09:11:47.110237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:18:08.349 [2024-11-20 09:11:47.110244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:08.349 [2024-11-20 09:11:47.111124] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:08.349 [2024-11-20 09:11:47.113452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.773 ms, result 0
00:18:08.349 [2024-11-20 09:11:47.115346] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:08.349 Some configs were skipped because the RPC state that can call them passed over.
00:18:08.349 09:11:47 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:08.609 [2024-11-20 09:11:47.344008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:08.609 [2024-11-20 09:11:47.344121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:08.610 [2024-11-20 09:11:47.344136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms
00:18:08.610 [2024-11-20 09:11:47.344146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:08.610 [2024-11-20 09:11:47.344174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.937 ms, result 0
00:18:08.610 true
00:18:08.610 09:11:47 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:08.871 [2024-11-20 09:11:47.544656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:08.871 [2024-11-20 09:11:47.544687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:08.871 [2024-11-20 09:11:47.544697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms
00:18:08.871 [2024-11-20 09:11:47.544703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:08.871 [2024-11-20 09:11:47.544731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.303 ms, result 0
00:18:08.871 true
00:18:08.871 09:11:47 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74156
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74156 ']'
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74156
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74156
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74156'
00:18:08.871 killing process with pid 74156
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74156
00:18:08.871 09:11:47 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74156
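The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the device's logical space: the startup layout dump reported 23592960 L2P entries, and 23592960 - 1024 = 23591936, so the second call covers exactly the last 1024 LBAs. The pair, isolated for reference (the same RPCs as in the xtrace; assumes a running target exposing ftl0):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024          # head of the LBA range
    $rpc bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024   # tail: 23592960 - 1024

00:18:09.444 [2024-11-20 09:11:48.160219]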
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.160396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:09.444 [2024-11-20 09:11:48.160452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:09.444 [2024-11-20 09:11:48.160474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.160527] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:09.444 [2024-11-20 09:11:48.162802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.162912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:09.444 [2024-11-20 09:11:48.162968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:18:09.444 [2024-11-20 09:11:48.162988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.163237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.163591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:09.444 [2024-11-20 09:11:48.163665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:09.444 [2024-11-20 09:11:48.163686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.167001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.167098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:09.444 [2024-11-20 09:11:48.167153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:18:09.444 [2024-11-20 09:11:48.167171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.172437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.172461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.444 [2024-11-20 09:11:48.172470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.231 ms 00:18:09.444 [2024-11-20 09:11:48.172476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.180850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.180888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.444 [2024-11-20 09:11:48.180900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.325 ms 00:18:09.444 [2024-11-20 09:11:48.180912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.188164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.188192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.444 [2024-11-20 09:11:48.188202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.219 ms 00:18:09.444 [2024-11-20 09:11:48.188209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.444 [2024-11-20 09:11:48.188328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.444 [2024-11-20 09:11:48.188337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.444 [2024-11-20 09:11:48.188346] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:18:09.444 [2024-11-20 09:11:48.188352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.444 [2024-11-20 09:11:48.197253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.444 [2024-11-20 09:11:48.197277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:09.444 [2024-11-20 09:11:48.197285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.883 ms
00:18:09.444 [2024-11-20 09:11:48.197291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.444 [2024-11-20 09:11:48.205413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.444 [2024-11-20 09:11:48.205436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:09.444 [2024-11-20 09:11:48.205447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.090 ms
00:18:09.444 [2024-11-20 09:11:48.205453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.444 [2024-11-20 09:11:48.213158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.444 [2024-11-20 09:11:48.213248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:09.444 [2024-11-20 09:11:48.213264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.674 ms
00:18:09.444 [2024-11-20 09:11:48.213270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.444 [2024-11-20 09:11:48.221039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.444 [2024-11-20 09:11:48.221064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:09.444 [2024-11-20 09:11:48.221073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.718 ms
00:18:09.444 [2024-11-20 09:11:48.221079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.444 [2024-11-20 09:11:48.221119] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:09.444 [2024-11-20 09:11:48.221132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 85: 0 / 261120 wr_cnt: 0 state: free [85 identical per-band entries collapsed]
0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:09.445 [2024-11-20 09:11:48.221807] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:09.445 [2024-11-20 09:11:48.221817] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9 00:18:09.445 [2024-11-20 09:11:48.221831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:09.445 [2024-11-20 09:11:48.221838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:09.445 [2024-11-20 09:11:48.221844] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:09.446 [2024-11-20 09:11:48.221851] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:09.446 [2024-11-20 09:11:48.221857] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:09.446 [2024-11-20 09:11:48.221864] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:09.446 [2024-11-20 09:11:48.221882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:09.446 [2024-11-20 09:11:48.221889] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:09.446 [2024-11-20 09:11:48.221894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:09.446 [2024-11-20 09:11:48.221901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:09.446 [2024-11-20 09:11:48.221907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:09.446 [2024-11-20 09:11:48.221915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:18:09.446 [2024-11-20 09:11:48.221924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.232365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.446 [2024-11-20 09:11:48.232450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:09.446 [2024-11-20 09:11:48.232466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.423 ms 00:18:09.446 [2024-11-20 09:11:48.232473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.232785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.446 [2024-11-20 09:11:48.232795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:09.446 [2024-11-20 09:11:48.232805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:09.446 [2024-11-20 09:11:48.232811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.269840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.446 [2024-11-20 09:11:48.269868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.446 [2024-11-20 09:11:48.269889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.446 [2024-11-20 09:11:48.269896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.269980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.446 [2024-11-20 09:11:48.269988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.446 [2024-11-20 09:11:48.269999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.446 [2024-11-20 09:11:48.270004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.270044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.446 [2024-11-20 09:11:48.270051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.446 [2024-11-20 09:11:48.270061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.446 [2024-11-20 09:11:48.270068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.270084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.446 [2024-11-20 09:11:48.270092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.446 [2024-11-20 09:11:48.270099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.446 [2024-11-20 09:11:48.270107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.446 [2024-11-20 09:11:48.332380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.446 [2024-11-20 09:11:48.332414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.446 [2024-11-20 09:11:48.332426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.446 [2024-11-20 09:11:48.332434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 
09:11:48.382982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.706 [2024-11-20 09:11:48.383165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.706 [2024-11-20 09:11:48.383270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.706 [2024-11-20 09:11:48.383320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.706 [2024-11-20 09:11:48.383429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:09.706 [2024-11-20 09:11:48.383480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.706 [2024-11-20 09:11:48.383544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:09.706 [2024-11-20 09:11:48.383603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.706 [2024-11-20 09:11:48.383611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:09.706 [2024-11-20 09:11:48.383617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.706 [2024-11-20 09:11:48.383748] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 223.506 ms, result 0 00:18:10.277 09:11:48 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:10.277 09:11:48 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:10.277 [2024-11-20 09:11:48.995779] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:18:10.277 [2024-11-20 09:11:48.995903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74203 ] 00:18:10.277 [2024-11-20 09:11:49.146661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.537 [2024-11-20 09:11:49.238067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.799 [2024-11-20 09:11:49.466445] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.799 [2024-11-20 09:11:49.466500] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:10.799 [2024-11-20 09:11:49.620187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.620225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.799 [2024-11-20 09:11:49.620236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.799 [2024-11-20 09:11:49.620243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.622474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.622506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.799 [2024-11-20 09:11:49.622514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:18:10.799 [2024-11-20 09:11:49.622520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.622585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.799 [2024-11-20 09:11:49.623303] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.799 [2024-11-20 09:11:49.623677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.623727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.799 [2024-11-20 09:11:49.623757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:18:10.799 [2024-11-20 09:11:49.623780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.626413] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:10.799 [2024-11-20 09:11:49.643536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.643570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:10.799 [2024-11-20 09:11:49.643581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.129 ms 00:18:10.799 [2024-11-20 09:11:49.643589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.643660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.643671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:10.799 [2024-11-20 09:11:49.643680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.018 ms 00:18:10.799 [2024-11-20 09:11:49.643688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.650275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.650303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.799 [2024-11-20 09:11:49.650313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.544 ms 00:18:10.799 [2024-11-20 09:11:49.650321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.650421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.650431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.799 [2024-11-20 09:11:49.650440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:10.799 [2024-11-20 09:11:49.650447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.650472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.650484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.799 [2024-11-20 09:11:49.650493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:10.799 [2024-11-20 09:11:49.650500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.650522] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:10.799 [2024-11-20 09:11:49.654091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.654119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.799 [2024-11-20 09:11:49.654128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.576 ms 00:18:10.799 [2024-11-20 09:11:49.654136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.654175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.654183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.799 [2024-11-20 09:11:49.654191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:10.799 [2024-11-20 09:11:49.654199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.654217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:10.799 [2024-11-20 09:11:49.654239] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:10.799 [2024-11-20 09:11:49.654276] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:10.799 [2024-11-20 09:11:49.654292] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:10.799 [2024-11-20 09:11:49.654396] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.799 [2024-11-20 09:11:49.654407] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.799 [2024-11-20 09:11:49.654419] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.799 [2024-11-20 09:11:49.654430] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654441] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:10.799 [2024-11-20 09:11:49.654457] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.799 [2024-11-20 09:11:49.654464] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.799 [2024-11-20 09:11:49.654474] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.799 [2024-11-20 09:11:49.654486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.654493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.799 [2024-11-20 09:11:49.654501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:18:10.799 [2024-11-20 09:11:49.654508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.654606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.799 [2024-11-20 09:11:49.654616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.799 [2024-11-20 09:11:49.654626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:10.799 [2024-11-20 09:11:49.654634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.799 [2024-11-20 09:11:49.654736] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:10.799 [2024-11-20 09:11:49.654748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.799 [2024-11-20 09:11:49.654761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.799 [2024-11-20 09:11:49.654784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.799 [2024-11-20 09:11:49.654807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.799 [2024-11-20 09:11:49.654820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.799 [2024-11-20 09:11:49.654827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:10.799 [2024-11-20 09:11:49.654833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.799 [2024-11-20 09:11:49.654846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.799 [2024-11-20 09:11:49.654853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:10.799 [2024-11-20 09:11:49.654859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654867] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.799 [2024-11-20 09:11:49.654890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.799 [2024-11-20 09:11:49.654911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.799 [2024-11-20 09:11:49.654932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.799 [2024-11-20 09:11:49.654954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:10.799 [2024-11-20 09:11:49.654961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.799 [2024-11-20 09:11:49.654968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.800 [2024-11-20 09:11:49.654974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:10.800 [2024-11-20 09:11:49.654982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.800 [2024-11-20 09:11:49.654989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.800 [2024-11-20 09:11:49.654996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:10.800 [2024-11-20 09:11:49.655005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.800 [2024-11-20 09:11:49.655012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.800 [2024-11-20 09:11:49.655019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:10.800 [2024-11-20 09:11:49.655025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.800 [2024-11-20 09:11:49.655033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.800 [2024-11-20 09:11:49.655039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:10.800 [2024-11-20 09:11:49.655046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.800 [2024-11-20 09:11:49.655052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.800 [2024-11-20 09:11:49.655059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:10.800 [2024-11-20 09:11:49.655065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.800 [2024-11-20 09:11:49.655071] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.800 [2024-11-20 09:11:49.655080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.800 [2024-11-20 09:11:49.655088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.800 [2024-11-20 09:11:49.655098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.800 [2024-11-20 09:11:49.655105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.800 
[2024-11-20 09:11:49.655114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.800 [2024-11-20 09:11:49.655121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.800 [2024-11-20 09:11:49.655128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.800 [2024-11-20 09:11:49.655134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.800 [2024-11-20 09:11:49.655141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.800 [2024-11-20 09:11:49.655150] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.800 [2024-11-20 09:11:49.655159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:10.800 [2024-11-20 09:11:49.655174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:10.800 [2024-11-20 09:11:49.655181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:10.800 [2024-11-20 09:11:49.655188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:10.800 [2024-11-20 09:11:49.655196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:10.800 [2024-11-20 09:11:49.655202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:10.800 [2024-11-20 09:11:49.655209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:10.800 [2024-11-20 09:11:49.655216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:10.800 [2024-11-20 09:11:49.655223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:10.800 [2024-11-20 09:11:49.655230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:10.800 [2024-11-20 09:11:49.655266] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.800 [2024-11-20 09:11:49.655274] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655282] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.800 [2024-11-20 09:11:49.655289] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.800 [2024-11-20 09:11:49.655297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.800 [2024-11-20 09:11:49.655305] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.800 [2024-11-20 09:11:49.655312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.800 [2024-11-20 09:11:49.655319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.800 [2024-11-20 09:11:49.655330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:18:10.800 [2024-11-20 09:11:49.655337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.800 [2024-11-20 09:11:49.684658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.800 [2024-11-20 09:11:49.684691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.800 [2024-11-20 09:11:49.684702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.268 ms 00:18:10.800 [2024-11-20 09:11:49.684709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.800 [2024-11-20 09:11:49.684830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.800 [2024-11-20 09:11:49.684843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:10.800 [2024-11-20 09:11:49.684852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:10.800 [2024-11-20 09:11:49.684860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.725718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.725755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:11.061 [2024-11-20 09:11:49.725766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.815 ms 00:18:11.061 [2024-11-20 09:11:49.725777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.725883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.725896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:11.061 [2024-11-20 09:11:49.725906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:11.061 [2024-11-20 09:11:49.725914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.726339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.726369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:11.061 [2024-11-20 09:11:49.726378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:18:11.061 [2024-11-20 09:11:49.726392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 
09:11:49.726534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.726549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:11.061 [2024-11-20 09:11:49.726558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:18:11.061 [2024-11-20 09:11:49.726567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.741519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.741549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:11.061 [2024-11-20 09:11:49.741558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.932 ms 00:18:11.061 [2024-11-20 09:11:49.741566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.755212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:11.061 [2024-11-20 09:11:49.755245] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:11.061 [2024-11-20 09:11:49.755257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.755266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:11.061 [2024-11-20 09:11:49.755275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.596 ms 00:18:11.061 [2024-11-20 09:11:49.755283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.780056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.780249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:11.061 [2024-11-20 09:11:49.780267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.704 ms 00:18:11.061 [2024-11-20 09:11:49.780274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.792789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.792827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:11.061 [2024-11-20 09:11:49.792839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.142 ms 00:18:11.061 [2024-11-20 09:11:49.792847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.804694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.804725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:11.061 [2024-11-20 09:11:49.804736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.768 ms 00:18:11.061 [2024-11-20 09:11:49.804743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.805383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.805410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:11.061 [2024-11-20 09:11:49.805420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:18:11.061 [2024-11-20 09:11:49.805429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.865236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.865272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:11.061 [2024-11-20 09:11:49.865285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.783 ms 00:18:11.061 [2024-11-20 09:11:49.865293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.875901] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:11.061 [2024-11-20 09:11:49.892612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.892645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:11.061 [2024-11-20 09:11:49.892658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.251 ms 00:18:11.061 [2024-11-20 09:11:49.892666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.892741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.892752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:11.061 [2024-11-20 09:11:49.892761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:11.061 [2024-11-20 09:11:49.892769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.892817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.892827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:11.061 [2024-11-20 09:11:49.892835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:11.061 [2024-11-20 09:11:49.892843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.892894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.892906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:11.061 [2024-11-20 09:11:49.892914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:11.061 [2024-11-20 09:11:49.892922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.892956] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:11.061 [2024-11-20 09:11:49.892966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.892974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:11.061 [2024-11-20 09:11:49.892983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:11.061 [2024-11-20 09:11:49.892991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.917163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.917197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:11.061 [2024-11-20 09:11:49.917210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.140 ms 00:18:11.061 [2024-11-20 09:11:49.917218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.917311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:11.061 [2024-11-20 09:11:49.917322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:11.061 [2024-11-20 09:11:49.917330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:11.061 [2024-11-20 09:11:49.917338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:11.061 [2024-11-20 09:11:49.918300] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:11.061 [2024-11-20 09:11:49.921221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.804 ms, result 0 00:18:11.061 [2024-11-20 09:11:49.922271] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:11.061 [2024-11-20 09:11:49.935267] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.449  [2024-11-20T09:11:51.942Z] Copying: 19/256 [MB] (19 MBps) [2024-11-20T09:11:53.331Z] Copying: 30/256 [MB] (11 MBps) [2024-11-20T09:11:54.276Z] Copying: 41/256 [MB] (10 MBps) [2024-11-20T09:11:55.220Z] Copying: 52/256 [MB] (11 MBps) [2024-11-20T09:11:56.165Z] Copying: 62/256 [MB] (10 MBps) [2024-11-20T09:11:57.108Z] Copying: 73/256 [MB] (10 MBps) [2024-11-20T09:11:58.053Z] Copying: 84/256 [MB] (10 MBps) [2024-11-20T09:11:58.995Z] Copying: 95/256 [MB] (11 MBps) [2024-11-20T09:12:00.384Z] Copying: 106/256 [MB] (11 MBps) [2024-11-20T09:12:00.959Z] Copying: 118/256 [MB] (11 MBps) [2024-11-20T09:12:02.345Z] Copying: 128/256 [MB] (10 MBps) [2024-11-20T09:12:03.290Z] Copying: 139/256 [MB] (10 MBps) [2024-11-20T09:12:04.232Z] Copying: 149/256 [MB] (10 MBps) [2024-11-20T09:12:05.178Z] Copying: 161/256 [MB] (11 MBps) [2024-11-20T09:12:06.122Z] Copying: 172/256 [MB] (11 MBps) [2024-11-20T09:12:07.068Z] Copying: 184/256 [MB] (11 MBps) [2024-11-20T09:12:08.011Z] Copying: 194/256 [MB] (10 MBps) [2024-11-20T09:12:08.956Z] Copying: 205/256 [MB] (10 MBps) [2024-11-20T09:12:10.344Z] Copying: 216/256 [MB] (11 MBps) [2024-11-20T09:12:11.284Z] Copying: 226/256 [MB] (10 MBps) [2024-11-20T09:12:11.547Z] Copying: 249/256 [MB] (22 MBps) [2024-11-20T09:12:11.547Z] Copying: 256/256 [MB] (average 11 MBps)[2024-11-20 09:12:11.305850] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.628 [2024-11-20 09:12:11.315847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.315902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.628 [2024-11-20 09:12:11.315917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:32.628 [2024-11-20 09:12:11.315932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.315956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:32.628 [2024-11-20 09:12:11.318793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.318826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.628 [2024-11-20 09:12:11.318837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms 00:18:32.628 [2024-11-20 09:12:11.318844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.319114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.319124] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.628 [2024-11-20 09:12:11.319133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:32.628 [2024-11-20 09:12:11.319141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.322842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.322994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.628 [2024-11-20 09:12:11.323010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.686 ms 00:18:32.628 [2024-11-20 09:12:11.323018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.329921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.330054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.628 [2024-11-20 09:12:11.330072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.880 ms 00:18:32.628 [2024-11-20 09:12:11.330080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.355329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.355376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.628 [2024-11-20 09:12:11.355389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.185 ms 00:18:32.628 [2024-11-20 09:12:11.355397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.371755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.371811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.628 [2024-11-20 09:12:11.371824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.306 ms 00:18:32.628 [2024-11-20 09:12:11.371836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.372014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.372027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.628 [2024-11-20 09:12:11.372036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:32.628 [2024-11-20 09:12:11.372044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.398280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.398325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:32.628 [2024-11-20 09:12:11.398336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.210 ms 00:18:32.628 [2024-11-20 09:12:11.398343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.423555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.423598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:32.628 [2024-11-20 09:12:11.423610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.152 ms 00:18:32.628 [2024-11-20 09:12:11.423616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.449208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 
[2024-11-20 09:12:11.449392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:32.628 [2024-11-20 09:12:11.449413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.529 ms 00:18:32.628 [2024-11-20 09:12:11.449420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.474100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.628 [2024-11-20 09:12:11.474152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:32.628 [2024-11-20 09:12:11.474167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.306 ms 00:18:32.628 [2024-11-20 09:12:11.474174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.628 [2024-11-20 09:12:11.474223] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:32.628 [2024-11-20 09:12:11.474240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:32.628 [2024-11-20 09:12:11.474447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474573] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 
09:12:11.474763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:18:32.629 [2024-11-20 09:12:11.474978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.474986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.475004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.475013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.475021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.475029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.475036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:32.629 [2024-11-20 09:12:11.475052] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:32.629 [2024-11-20 09:12:11.475060] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9 00:18:32.629 [2024-11-20 09:12:11.475069] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:32.629 [2024-11-20 09:12:11.475076] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:32.629 [2024-11-20 09:12:11.475084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:32.629 [2024-11-20 09:12:11.475093] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:32.629 [2024-11-20 09:12:11.475100] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:32.629 [2024-11-20 09:12:11.475107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:32.629 [2024-11-20 09:12:11.475115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:32.629 [2024-11-20 09:12:11.475121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:32.629 [2024-11-20 09:12:11.475127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:32.629 [2024-11-20 09:12:11.475135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.629 [2024-11-20 09:12:11.475146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:32.629 [2024-11-20 09:12:11.475155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.912 ms 00:18:32.629 [2024-11-20 09:12:11.475163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.629 [2024-11-20 09:12:11.488703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.629 [2024-11-20 09:12:11.488932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:32.629 [2024-11-20 09:12:11.488954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.507 ms 00:18:32.629 [2024-11-20 09:12:11.488963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.630 [2024-11-20 09:12:11.489371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.630 [2024-11-20 09:12:11.489382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:32.630 [2024-11-20 09:12:11.489392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:32.630 [2024-11-20 09:12:11.489400] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.630 [2024-11-20 09:12:11.528214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.630 [2024-11-20 09:12:11.528391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.630 [2024-11-20 09:12:11.528412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.630 [2024-11-20 09:12:11.528420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.630 [2024-11-20 09:12:11.528515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.630 [2024-11-20 09:12:11.528524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.630 [2024-11-20 09:12:11.528533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.630 [2024-11-20 09:12:11.528541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.630 [2024-11-20 09:12:11.528598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.630 [2024-11-20 09:12:11.528608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.630 [2024-11-20 09:12:11.528617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.630 [2024-11-20 09:12:11.528625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.630 [2024-11-20 09:12:11.528644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.630 [2024-11-20 09:12:11.528657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.630 [2024-11-20 09:12:11.528665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.630 [2024-11-20 09:12:11.528672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.613703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.613760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.890 [2024-11-20 09:12:11.613774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.613782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.890 [2024-11-20 09:12:11.684386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:32.890 [2024-11-20 09:12:11.684478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:32.890 [2024-11-20 09:12:11.684542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:32.890 [2024-11-20 09:12:11.684667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:32.890 [2024-11-20 09:12:11.684728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:32.890 [2024-11-20 09:12:11.684804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.684865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:32.890 [2024-11-20 09:12:11.684917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:32.890 [2024-11-20 09:12:11.684929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:32.890 [2024-11-20 09:12:11.684937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.890 [2024-11-20 09:12:11.685127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 369.242 ms, result 0 00:18:33.884 00:18:33.884 00:18:33.884 09:12:12 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:33.884 09:12:12 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:34.156 09:12:12 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:34.418 [2024-11-20 09:12:13.078143] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:18:34.418 [2024-11-20 09:12:13.078287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74462 ] 00:18:34.418 [2024-11-20 09:12:13.243157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.679 [2024-11-20 09:12:13.361506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.942 [2024-11-20 09:12:13.649413] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:34.942 [2024-11-20 09:12:13.649491] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:34.942 [2024-11-20 09:12:13.812259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.812318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:34.942 [2024-11-20 09:12:13.812334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:34.942 [2024-11-20 09:12:13.812343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.815362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.815559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:34.942 [2024-11-20 09:12:13.815580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.997 ms 00:18:34.942 [2024-11-20 09:12:13.815588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.816118] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:34.942 [2024-11-20 09:12:13.817005] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:34.942 [2024-11-20 09:12:13.817046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.817057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:34.942 [2024-11-20 09:12:13.817068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:18:34.942 [2024-11-20 09:12:13.817076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.818809] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:34.942 [2024-11-20 09:12:13.833259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.833302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:34.942 [2024-11-20 09:12:13.833316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.452 ms 00:18:34.942 [2024-11-20 09:12:13.833324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.833446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.833459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:34.942 [2024-11-20 09:12:13.833469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:34.942 [2024-11-20 09:12:13.833478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.842161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:34.942 [2024-11-20 09:12:13.842204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:34.942 [2024-11-20 09:12:13.842215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.637 ms 00:18:34.942 [2024-11-20 09:12:13.842224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.842333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.842345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:34.942 [2024-11-20 09:12:13.842354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:34.942 [2024-11-20 09:12:13.842362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.842391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.942 [2024-11-20 09:12:13.842404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:34.942 [2024-11-20 09:12:13.842413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:34.942 [2024-11-20 09:12:13.842421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.942 [2024-11-20 09:12:13.842443] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:34.942 [2024-11-20 09:12:13.846510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.943 [2024-11-20 09:12:13.846546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:34.943 [2024-11-20 09:12:13.846558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.073 ms 00:18:34.943 [2024-11-20 09:12:13.846566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.943 [2024-11-20 09:12:13.846639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.943 [2024-11-20 09:12:13.846649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:34.943 [2024-11-20 09:12:13.846659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:34.943 [2024-11-20 09:12:13.846667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.943 [2024-11-20 09:12:13.846686] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:34.943 [2024-11-20 09:12:13.846711] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:34.943 [2024-11-20 09:12:13.846747] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:34.943 [2024-11-20 09:12:13.846763] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:34.943 [2024-11-20 09:12:13.846885] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:34.943 [2024-11-20 09:12:13.846896] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:34.943 [2024-11-20 09:12:13.846908] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:34.943 [2024-11-20 09:12:13.846918] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:34.943 [2024-11-20 09:12:13.846931] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:34.943 [2024-11-20 09:12:13.846939] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:34.943 [2024-11-20 09:12:13.846947] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:34.943 [2024-11-20 09:12:13.846955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:34.943 [2024-11-20 09:12:13.846963] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:34.943 [2024-11-20 09:12:13.846971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.943 [2024-11-20 09:12:13.846979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:34.943 [2024-11-20 09:12:13.846987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:18:34.943 [2024-11-20 09:12:13.846995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.943 [2024-11-20 09:12:13.847083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.943 [2024-11-20 09:12:13.847092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:34.943 [2024-11-20 09:12:13.847102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:34.943 [2024-11-20 09:12:13.847109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:34.943 [2024-11-20 09:12:13.847213] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:34.943 [2024-11-20 09:12:13.847223] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:34.943 [2024-11-20 09:12:13.847232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:34.943 [2024-11-20 09:12:13.847258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:34.943 [2024-11-20 09:12:13.847281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.943 [2024-11-20 09:12:13.847294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:34.943 [2024-11-20 09:12:13.847301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:34.943 [2024-11-20 09:12:13.847308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:34.943 [2024-11-20 09:12:13.847322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:34.943 [2024-11-20 09:12:13.847329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:34.943 [2024-11-20 09:12:13.847336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:34.943 [2024-11-20 09:12:13.847349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847356] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:34.943 [2024-11-20 09:12:13.847369] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:34.943 [2024-11-20 09:12:13.847390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:34.943 [2024-11-20 09:12:13.847411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:34.943 [2024-11-20 09:12:13.847431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:34.943 [2024-11-20 09:12:13.847451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.943 [2024-11-20 09:12:13.847464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:34.943 [2024-11-20 09:12:13.847470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:34.943 [2024-11-20 09:12:13.847478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:34.943 [2024-11-20 09:12:13.847485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:34.943 [2024-11-20 09:12:13.847491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:34.943 [2024-11-20 09:12:13.847498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:34.943 [2024-11-20 09:12:13.847511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:34.943 [2024-11-20 09:12:13.847518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847524] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:34.943 [2024-11-20 09:12:13.847532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:34.943 [2024-11-20 09:12:13.847539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:34.943 [2024-11-20 09:12:13.847557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:34.943 [2024-11-20 09:12:13.847564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:34.943 [2024-11-20 09:12:13.847570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:34.943 
[2024-11-20 09:12:13.847577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:34.943 [2024-11-20 09:12:13.847583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:34.943 [2024-11-20 09:12:13.847590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:34.943 [2024-11-20 09:12:13.847600] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:34.943 [2024-11-20 09:12:13.847608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.943 [2024-11-20 09:12:13.847617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:34.943 [2024-11-20 09:12:13.847625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:34.943 [2024-11-20 09:12:13.847633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:34.943 [2024-11-20 09:12:13.847640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:34.943 [2024-11-20 09:12:13.847647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:34.943 [2024-11-20 09:12:13.847655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:34.943 [2024-11-20 09:12:13.847662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:34.943 [2024-11-20 09:12:13.847669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:34.943 [2024-11-20 09:12:13.847676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:34.943 [2024-11-20 09:12:13.847683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:34.943 [2024-11-20 09:12:13.847690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:34.943 [2024-11-20 09:12:13.847697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:34.943 [2024-11-20 09:12:13.847704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:34.943 [2024-11-20 09:12:13.847713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:34.943 [2024-11-20 09:12:13.847720] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:34.943 [2024-11-20 09:12:13.847729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:34.944 [2024-11-20 09:12:13.847737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:34.944 [2024-11-20 09:12:13.847744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:34.944 [2024-11-20 09:12:13.847752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:34.944 [2024-11-20 09:12:13.847759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:34.944 [2024-11-20 09:12:13.847766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:34.944 [2024-11-20 09:12:13.847774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:34.944 [2024-11-20 09:12:13.847785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:18:34.944 [2024-11-20 09:12:13.847793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.880091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.880292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.206 [2024-11-20 09:12:13.880314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.246 ms 00:18:35.206 [2024-11-20 09:12:13.880323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.880472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.880488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.206 [2024-11-20 09:12:13.880498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:35.206 [2024-11-20 09:12:13.880506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.932482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.932540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.206 [2024-11-20 09:12:13.932556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.950 ms 00:18:35.206 [2024-11-20 09:12:13.932568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.932695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.932708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.206 [2024-11-20 09:12:13.932718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:35.206 [2024-11-20 09:12:13.932727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.933324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.933359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.206 [2024-11-20 09:12:13.933371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:18:35.206 [2024-11-20 09:12:13.933388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.933553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.933570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.206 [2024-11-20 09:12:13.933579] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:18:35.206 [2024-11-20 09:12:13.933587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.949828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.949897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.206 [2024-11-20 09:12:13.949909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.217 ms 00:18:35.206 [2024-11-20 09:12:13.949918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.964324] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:35.206 [2024-11-20 09:12:13.964505] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.206 [2024-11-20 09:12:13.964525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.964534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.206 [2024-11-20 09:12:13.964543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.487 ms 00:18:35.206 [2024-11-20 09:12:13.964550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:13.990395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:13.990453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:35.206 [2024-11-20 09:12:13.990465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.756 ms 00:18:35.206 [2024-11-20 09:12:13.990474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.003489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.003535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:35.206 [2024-11-20 09:12:14.003551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.917 ms 00:18:35.206 [2024-11-20 09:12:14.003560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.016195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.016241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:35.206 [2024-11-20 09:12:14.016254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.540 ms 00:18:35.206 [2024-11-20 09:12:14.016262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.016964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.016994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:35.206 [2024-11-20 09:12:14.017005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.577 ms 00:18:35.206 [2024-11-20 09:12:14.017014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.081457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.081519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:35.206 [2024-11-20 09:12:14.081534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.411 ms 00:18:35.206 [2024-11-20 09:12:14.081543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.092860] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:35.206 [2024-11-20 09:12:14.111572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.111622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:35.206 [2024-11-20 09:12:14.111636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.931 ms 00:18:35.206 [2024-11-20 09:12:14.111645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.111745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.111757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:35.206 [2024-11-20 09:12:14.111766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:35.206 [2024-11-20 09:12:14.111775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.111832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.111842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:35.206 [2024-11-20 09:12:14.111851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:35.206 [2024-11-20 09:12:14.111859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.111925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.111938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:35.206 [2024-11-20 09:12:14.111948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.206 [2024-11-20 09:12:14.111956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.206 [2024-11-20 09:12:14.111993] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:35.206 [2024-11-20 09:12:14.112004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.206 [2024-11-20 09:12:14.112014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:35.206 [2024-11-20 09:12:14.112044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:35.206 [2024-11-20 09:12:14.112053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.468 [2024-11-20 09:12:14.137818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.468 [2024-11-20 09:12:14.137887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:35.468 [2024-11-20 09:12:14.137902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.736 ms 00:18:35.468 [2024-11-20 09:12:14.137911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.468 [2024-11-20 09:12:14.138058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.468 [2024-11-20 09:12:14.138071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:35.468 [2024-11-20 09:12:14.138081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:35.468 [2024-11-20 09:12:14.138089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:35.468 [2024-11-20 09:12:14.139594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.468 [2024-11-20 09:12:14.143136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.000 ms, result 0 00:18:35.468 [2024-11-20 09:12:14.144489] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.468 [2024-11-20 09:12:14.158067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.731  [2024-11-20T09:12:14.650Z] Copying: 4096/4096 [kB] (average 14 MBps)[2024-11-20 09:12:14.440802] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.731 [2024-11-20 09:12:14.449974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.450021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.731 [2024-11-20 09:12:14.450034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.731 [2024-11-20 09:12:14.450049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.450073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:35.731 [2024-11-20 09:12:14.453123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.453158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.731 [2024-11-20 09:12:14.453170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.037 ms 00:18:35.731 [2024-11-20 09:12:14.453177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.455968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.456140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.731 [2024-11-20 09:12:14.456160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.763 ms 00:18:35.731 [2024-11-20 09:12:14.456169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.460582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.460624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.731 [2024-11-20 09:12:14.460634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.392 ms 00:18:35.731 [2024-11-20 09:12:14.460642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.467600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.467778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.731 [2024-11-20 09:12:14.467797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.926 ms 00:18:35.731 [2024-11-20 09:12:14.467805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.493237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.493285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.731 [2024-11-20 09:12:14.493297] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.381 ms 00:18:35.731 [2024-11-20 09:12:14.493304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.509623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.509677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.731 [2024-11-20 09:12:14.509694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.268 ms 00:18:35.731 [2024-11-20 09:12:14.509702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.509850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.509862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.731 [2024-11-20 09:12:14.509902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:35.731 [2024-11-20 09:12:14.509911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.535792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.535838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:35.731 [2024-11-20 09:12:14.535850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.855 ms 00:18:35.731 [2024-11-20 09:12:14.535857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.561043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.561092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:35.731 [2024-11-20 09:12:14.561104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.106 ms 00:18:35.731 [2024-11-20 09:12:14.561111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.585678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.585719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.731 [2024-11-20 09:12:14.585730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.515 ms 00:18:35.731 [2024-11-20 09:12:14.585737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.610541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.731 [2024-11-20 09:12:14.610584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.731 [2024-11-20 09:12:14.610594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.727 ms 00:18:35.731 [2024-11-20 09:12:14.610601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.731 [2024-11-20 09:12:14.610648] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.731 [2024-11-20 09:12:14.610663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:35.731 [2024-11-20 09:12:14.610697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.731 [2024-11-20 09:12:14.610849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.610992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611299] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:35.732 [2024-11-20 09:12:14.611480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.732 [2024-11-20 09:12:14.611490] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9 00:18:35.732 [2024-11-20 09:12:14.611498] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.732 [2024-11-20 09:12:14.611506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:35.732 [2024-11-20 09:12:14.611514] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.732 [2024-11-20 09:12:14.611522] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.732 [2024-11-20 09:12:14.611530] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.732 [2024-11-20 09:12:14.611538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.732 [2024-11-20 09:12:14.611546] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.732 [2024-11-20 09:12:14.611552] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.732 [2024-11-20 09:12:14.611558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.732 [2024-11-20 09:12:14.611565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.732 [2024-11-20 09:12:14.611575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.732 [2024-11-20 09:12:14.611585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:18:35.732 [2024-11-20 09:12:14.611594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.732 [2024-11-20 09:12:14.624692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.732 [2024-11-20 09:12:14.624899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.732 [2024-11-20 09:12:14.624920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.066 ms 00:18:35.732 [2024-11-20 09:12:14.624928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.732 [2024-11-20 09:12:14.625334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.733 [2024-11-20 09:12:14.625346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.733 [2024-11-20 09:12:14.625355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:35.733 [2024-11-20 09:12:14.625363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.664206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.664374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.995 [2024-11-20 09:12:14.664396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.664405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.664508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.664518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.995 [2024-11-20 09:12:14.664526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.664534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.664590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.664600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.995 [2024-11-20 09:12:14.664607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.664615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.664633] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.664644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.995 [2024-11-20 09:12:14.664652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.664660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.748082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.748133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.995 [2024-11-20 09:12:14.748146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.748155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.816590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.816642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.995 [2024-11-20 09:12:14.816654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.816664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.816743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.816752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.995 [2024-11-20 09:12:14.816761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.816770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.816803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.816813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.995 [2024-11-20 09:12:14.816828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.816837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.816970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.816982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.995 [2024-11-20 09:12:14.816991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.816999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.817034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.817045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.995 [2024-11-20 09:12:14.817054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.817066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.817109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.817120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.995 [2024-11-20 09:12:14.817128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.817136] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.817185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.995 [2024-11-20 09:12:14.817196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.995 [2024-11-20 09:12:14.817207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.995 [2024-11-20 09:12:14.817216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.995 [2024-11-20 09:12:14.817369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.382 ms, result 0 00:18:36.940 00:18:36.940 00:18:36.941 09:12:15 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74487 00:18:36.941 09:12:15 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74487 00:18:36.941 09:12:15 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:36.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.941 09:12:15 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 74487 ']' 00:18:36.941 09:12:15 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.941 09:12:15 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.941 09:12:15 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.941 09:12:15 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.941 09:12:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:36.941 [2024-11-20 09:12:15.667050] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
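A note on the shutdown dump that ends above: every band reported is 0 / 261120 valid blocks in the free state, and the stats block shows 960 total (media) writes against 0 user writes, which is consistent with the printed "WAF: inf" if WAF is taken as total writes divided by user writes. Below is a minimal parser sketch for these dump lines; the helper name is hypothetical, and it assumes one NOTICE record per input line rather than the wrapped form shown here.

import re

BAND_RE  = re.compile(r"Band\s+(\d+):\s+(\d+)\s+/\s+(\d+)\s+wr_cnt:\s+(\d+)\s+state:\s+(\w+)")
TOTAL_RE = re.compile(r"total writes:\s+(\d+)")
USER_RE  = re.compile(r"user writes:\s+(\d+)")

def parse_ftl_dump(lines):
    # Collect (band, valid, size, wr_cnt, state) tuples plus the WAF inputs.
    bands, total, user = [], None, None
    for line in lines:
        if (m := BAND_RE.search(line)):
            i, valid, size, wr, state = m.groups()
            bands.append((int(i), int(valid), int(size), int(wr), state))
        elif (m := TOTAL_RE.search(line)):
            total = int(m.group(1))
        elif (m := USER_RE.search(line)):
            user = int(m.group(1))
    # "WAF: inf" above is consistent with total/user = 960/0.
    waf = float("inf") if not user else total / user
    return bands, waf

Fed the "Bands validity" section of this dump, it yields only free bands and waf == float('inf'), matching the printed stats.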
00:18:36.941 [2024-11-20 09:12:15.667201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74487 ] 00:18:36.941 [2024-11-20 09:12:15.828945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.202 [2024-11-20 09:12:15.953996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.774 09:12:16 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.774 09:12:16 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:18:37.774 09:12:16 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:38.036 [2024-11-20 09:12:16.860432] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.036 [2024-11-20 09:12:16.860510] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:38.299 [2024-11-20 09:12:17.039673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.039927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:38.299 [2024-11-20 09:12:17.039956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.299 [2024-11-20 09:12:17.039966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.042923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.043099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:38.299 [2024-11-20 09:12:17.043122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.927 ms 00:18:38.299 [2024-11-20 09:12:17.043131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.043632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:38.299 [2024-11-20 09:12:17.044493] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:38.299 [2024-11-20 09:12:17.044539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.044549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:38.299 [2024-11-20 09:12:17.044562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:18:38.299 [2024-11-20 09:12:17.044570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.046371] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:38.299 [2024-11-20 09:12:17.060474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.060529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:38.299 [2024-11-20 09:12:17.060543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.112 ms 00:18:38.299 [2024-11-20 09:12:17.060554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.060669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.060684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:38.299 [2024-11-20 09:12:17.060693] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:38.299 [2024-11-20 09:12:17.060702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.069121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.069283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:38.299 [2024-11-20 09:12:17.069354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.366 ms 00:18:38.299 [2024-11-20 09:12:17.069369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.069492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.069506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:38.299 [2024-11-20 09:12:17.069516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:18:38.299 [2024-11-20 09:12:17.069525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.069559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.069570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:38.299 [2024-11-20 09:12:17.069578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:38.299 [2024-11-20 09:12:17.069588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.069612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:38.299 [2024-11-20 09:12:17.073520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.073559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:38.299 [2024-11-20 09:12:17.073573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.911 ms 00:18:38.299 [2024-11-20 09:12:17.073582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.073676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.073686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:38.299 [2024-11-20 09:12:17.073697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:38.299 [2024-11-20 09:12:17.073708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.073732] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:38.299 [2024-11-20 09:12:17.073753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:38.299 [2024-11-20 09:12:17.073798] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:38.299 [2024-11-20 09:12:17.073814] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:38.299 [2024-11-20 09:12:17.073947] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:38.299 [2024-11-20 09:12:17.073960] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:38.299 [2024-11-20 09:12:17.073976] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:38.299 [2024-11-20 09:12:17.073989] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:38.299 [2024-11-20 09:12:17.074001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:38.299 [2024-11-20 09:12:17.074010] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:38.299 [2024-11-20 09:12:17.074021] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:38.299 [2024-11-20 09:12:17.074028] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:38.299 [2024-11-20 09:12:17.074039] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:38.299 [2024-11-20 09:12:17.074047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.074056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:38.299 [2024-11-20 09:12:17.074065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:18:38.299 [2024-11-20 09:12:17.074074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.074163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.299 [2024-11-20 09:12:17.074173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:38.299 [2024-11-20 09:12:17.074182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:38.299 [2024-11-20 09:12:17.074191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.299 [2024-11-20 09:12:17.074292] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:38.299 [2024-11-20 09:12:17.074305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:38.299 [2024-11-20 09:12:17.074313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.299 [2024-11-20 09:12:17.074322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.299 [2024-11-20 09:12:17.074330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:38.299 [2024-11-20 09:12:17.074340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:38.299 [2024-11-20 09:12:17.074347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:38.299 [2024-11-20 09:12:17.074360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:38.299 [2024-11-20 09:12:17.074367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:38.299 [2024-11-20 09:12:17.074375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.299 [2024-11-20 09:12:17.074382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:38.299 [2024-11-20 09:12:17.074392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:38.299 [2024-11-20 09:12:17.074400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:38.299 [2024-11-20 09:12:17.074409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:38.299 [2024-11-20 09:12:17.074416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:38.299 [2024-11-20 09:12:17.074425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.300 
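The layout figures just printed are internally consistent: one 4-byte L2P entry per LBA means the l2p region needs 23592960 * 4 bytes = 90.00 MiB, exactly the size dumped for Region l2p above. A quick check in Python; the 4 KiB FTL block size used for the capacity line is an assumption, not stated in this log.

L2P_ENTRIES   = 23592960      # "L2P entries" reported above
L2P_ADDR_SIZE = 4             # "L2P address size" reported above, in bytes
MIB = 1024 * 1024

# Matches "Region l2p ... blocks: 90.00 MiB" in the layout dump:
assert L2P_ENTRIES * L2P_ADDR_SIZE / MIB == 90.0

# Assuming 4 KiB blocks, those entries map 90 GiB of user-addressable space:
print(L2P_ENTRIES * 4096 / MIB / 1024)   # -> 90.0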
[2024-11-20 09:12:17.074432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:38.300 [2024-11-20 09:12:17.074440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:38.300 [2024-11-20 09:12:17.074469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:38.300 [2024-11-20 09:12:17.074494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:38.300 [2024-11-20 09:12:17.074518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:38.300 [2024-11-20 09:12:17.074541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:38.300 [2024-11-20 09:12:17.074562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.300 [2024-11-20 09:12:17.074579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:38.300 [2024-11-20 09:12:17.074588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:38.300 [2024-11-20 09:12:17.074595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:38.300 [2024-11-20 09:12:17.074604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:38.300 [2024-11-20 09:12:17.074610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:38.300 [2024-11-20 09:12:17.074620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:38.300 [2024-11-20 09:12:17.074635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:38.300 [2024-11-20 09:12:17.074642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074652] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:38.300 [2024-11-20 09:12:17.074660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:38.300 [2024-11-20 09:12:17.074672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:38.300 [2024-11-20 09:12:17.074690] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:38.300 [2024-11-20 09:12:17.074697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:38.300 [2024-11-20 09:12:17.074705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:38.300 [2024-11-20 09:12:17.074712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:38.300 [2024-11-20 09:12:17.074720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:38.300 [2024-11-20 09:12:17.074727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:38.300 [2024-11-20 09:12:17.074737] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:38.300 [2024-11-20 09:12:17.074747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:38.300 [2024-11-20 09:12:17.074766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:38.300 [2024-11-20 09:12:17.074777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:38.300 [2024-11-20 09:12:17.074784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:38.300 [2024-11-20 09:12:17.074794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:38.300 [2024-11-20 09:12:17.074801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:38.300 [2024-11-20 09:12:17.074810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:38.300 [2024-11-20 09:12:17.074817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:38.300 [2024-11-20 09:12:17.074826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:38.300 [2024-11-20 09:12:17.074834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:38.300 [2024-11-20 09:12:17.074891] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:38.300 [2024-11-20 
09:12:17.074899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074911] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:38.300 [2024-11-20 09:12:17.074919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:38.300 [2024-11-20 09:12:17.074929] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:38.300 [2024-11-20 09:12:17.074937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:38.300 [2024-11-20 09:12:17.074947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.074955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:38.300 [2024-11-20 09:12:17.074965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:18:38.300 [2024-11-20 09:12:17.074973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.107330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.107378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:38.300 [2024-11-20 09:12:17.107392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.294 ms 00:18:38.300 [2024-11-20 09:12:17.107400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.107537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.107548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:38.300 [2024-11-20 09:12:17.107558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:38.300 [2024-11-20 09:12:17.107566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.142855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.142912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:38.300 [2024-11-20 09:12:17.142932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.263 ms 00:18:38.300 [2024-11-20 09:12:17.142940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.143056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.143067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:38.300 [2024-11-20 09:12:17.143079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:38.300 [2024-11-20 09:12:17.143087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.143630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.143668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:38.300 [2024-11-20 09:12:17.143684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:18:38.300 [2024-11-20 09:12:17.143692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.143847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.143857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:38.300 [2024-11-20 09:12:17.143867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:38.300 [2024-11-20 09:12:17.143892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.161971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.163022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:38.300 [2024-11-20 09:12:17.163053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.052 ms 00:18:38.300 [2024-11-20 09:12:17.163063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.177270] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:38.300 [2024-11-20 09:12:17.177317] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:38.300 [2024-11-20 09:12:17.177332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.177341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:38.300 [2024-11-20 09:12:17.177352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.142 ms 00:18:38.300 [2024-11-20 09:12:17.177360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.300 [2024-11-20 09:12:17.202817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.300 [2024-11-20 09:12:17.202999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:38.301 [2024-11-20 09:12:17.203026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.361 ms 00:18:38.301 [2024-11-20 09:12:17.203036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.215787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.215831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:38.563 [2024-11-20 09:12:17.215848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.656 ms 00:18:38.563 [2024-11-20 09:12:17.215856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.228353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.228396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:38.563 [2024-11-20 09:12:17.228410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.383 ms 00:18:38.563 [2024-11-20 09:12:17.228418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.229134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.229160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:38.563 [2024-11-20 09:12:17.229173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:18:38.563 [2024-11-20 09:12:17.229181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 
09:12:17.312421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.312498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:38.563 [2024-11-20 09:12:17.312521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.207 ms 00:18:38.563 [2024-11-20 09:12:17.312531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.323908] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:38.563 [2024-11-20 09:12:17.342758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.342814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:38.563 [2024-11-20 09:12:17.342831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.113 ms 00:18:38.563 [2024-11-20 09:12:17.342842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.342963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.342977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:38.563 [2024-11-20 09:12:17.342987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:38.563 [2024-11-20 09:12:17.343014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.343074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.343086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:38.563 [2024-11-20 09:12:17.343095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:38.563 [2024-11-20 09:12:17.343105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.343134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.343145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:38.563 [2024-11-20 09:12:17.343153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:38.563 [2024-11-20 09:12:17.343167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.343204] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:38.563 [2024-11-20 09:12:17.343220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.343229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:38.563 [2024-11-20 09:12:17.343244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:38.563 [2024-11-20 09:12:17.343252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.369649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.369700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:38.563 [2024-11-20 09:12:17.369718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.366 ms 00:18:38.563 [2024-11-20 09:12:17.369727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.369900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.563 [2024-11-20 09:12:17.369913] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:38.563 [2024-11-20 09:12:17.369926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:38.563 [2024-11-20 09:12:17.369937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.563 [2024-11-20 09:12:17.370990] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:38.563 [2024-11-20 09:12:17.374447] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.961 ms, result 0 00:18:38.563 [2024-11-20 09:12:17.376506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:38.563 Some configs were skipped because the RPC state that can call them passed over. 00:18:38.563 09:12:17 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:38.825 [2024-11-20 09:12:17.625329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.825 [2024-11-20 09:12:17.625533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:38.825 [2024-11-20 09:12:17.625602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.229 ms 00:18:38.825 [2024-11-20 09:12:17.625630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.825 [2024-11-20 09:12:17.625687] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.588 ms, result 0 00:18:38.825 true 00:18:38.825 09:12:17 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:39.087 [2024-11-20 09:12:17.841249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.087 [2024-11-20 09:12:17.841433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:39.087 [2024-11-20 09:12:17.841459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.868 ms 00:18:39.087 [2024-11-20 09:12:17.841468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.087 [2024-11-20 09:12:17.841514] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.137 ms, result 0 00:18:39.087 true 00:18:39.087 09:12:17 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74487 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74487 ']' 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74487 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74487 00:18:39.087 killing process with pid 74487 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74487' 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 74487 00:18:39.087 09:12:17 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 74487 00:18:39.655 [2024-11-20 09:12:18.489151] 
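The two bdev_ftl_unmap calls above trim 1024 blocks at each end of the address space; note that 23591936 + 1024 = 23592960, the exact L2P entry count reported at startup, so the second call covers the very last mappable LBAs. A sketch of issuing the same RPCs from Python rather than the shell, using the same rpc.py path that appears in the trace; error handling is omitted.

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

def ftl_unmap(bdev, lba, num_blocks):
    # bdev_ftl_unmap prints "true" on success, as seen twice above.
    subprocess.run([RPC, "bdev_ftl_unmap", "-b", bdev,
                    "--lba", str(lba), "--num_blocks", str(num_blocks)],
                   check=True)

ftl_unmap("ftl0", 0, 1024)           # head of the device
ftl_unmap("ftl0", 23591936, 1024)    # tail: ends at entry 23592960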
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.655 [2024-11-20 09:12:18.489199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:39.655 [2024-11-20 09:12:18.489209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:39.655 [2024-11-20 09:12:18.489216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.655 [2024-11-20 09:12:18.489234] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:39.655 [2024-11-20 09:12:18.491288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.655 [2024-11-20 09:12:18.491315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:39.655 [2024-11-20 09:12:18.491326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:18:39.655 [2024-11-20 09:12:18.491332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.655 [2024-11-20 09:12:18.491551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.655 [2024-11-20 09:12:18.491557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:39.655 [2024-11-20 09:12:18.491565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:18:39.655 [2024-11-20 09:12:18.491571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.655 [2024-11-20 09:12:18.494737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.655 [2024-11-20 09:12:18.494762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:39.656 [2024-11-20 09:12:18.494772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:18:39.656 [2024-11-20 09:12:18.494778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.500005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.500143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:39.656 [2024-11-20 09:12:18.500158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.197 ms 00:18:39.656 [2024-11-20 09:12:18.500164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.507429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.507525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:39.656 [2024-11-20 09:12:18.507541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.219 ms 00:18:39.656 [2024-11-20 09:12:18.507552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.513951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.514035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:39.656 [2024-11-20 09:12:18.514089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:18:39.656 [2024-11-20 09:12:18.514106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.514220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.514239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:39.656 [2024-11-20 09:12:18.514256] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:18:39.656 [2024-11-20 09:12:18.514298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.522153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.522239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:39.656 [2024-11-20 09:12:18.522285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.827 ms 00:18:39.656 [2024-11-20 09:12:18.522302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.529445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.529580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:39.656 [2024-11-20 09:12:18.529623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.106 ms 00:18:39.656 [2024-11-20 09:12:18.529655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.536585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.536664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.656 [2024-11-20 09:12:18.536706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.883 ms 00:18:39.656 [2024-11-20 09:12:18.536722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.543682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.656 [2024-11-20 09:12:18.543761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.656 [2024-11-20 09:12:18.543801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.904 ms 00:18:39.656 [2024-11-20 09:12:18.543817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.656 [2024-11-20 09:12:18.543856] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.656 [2024-11-20 09:12:18.543922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.543951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.543990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544205] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.544974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 
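The Action / name / duration / status quadruplets that bracket dumps like this one follow a fixed trace_step format, so per-step timings can be recovered and compared against the closing finish_msg totals (367.382 ms for 'FTL shutdown' and 330.961 ms for 'FTL startup' earlier in this log). A hedged Python sketch, again assuming one NOTICE record per input line:

import re

NAME_RE = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR_RE  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")
DONE_RE = re.compile(r"finish_msg: .* name '([^']+)', duration = ([0-9.]+) ms")

def step_durations(records):
    # Pair each "name:" record with the "duration:" record that follows it.
    steps, pending = [], None
    for rec in records:
        if (m := NAME_RE.search(rec)):
            pending = m.group(1).strip()
        elif pending and (m := DUR_RE.search(rec)):
            steps.append((pending, float(m.group(1))))
            pending = None
        elif (m := DONE_RE.search(rec)):
            # e.g. ("TOTAL FTL shutdown", 367.382) for the run earlier in this log
            steps.append(("TOTAL " + m.group(1), float(m.group(2))))
    return steps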
[2024-11-20 09:12:18.545113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.545994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.546017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.546040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.546080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.546104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.656 [2024-11-20 09:12:18.546150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:39.657 [2024-11-20 09:12:18.546214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.546914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.657 [2024-11-20 09:12:18.547545] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.657 [2024-11-20 09:12:18.547556] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9 00:18:39.657 [2024-11-20 09:12:18.547566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.657 [2024-11-20 09:12:18.547575] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.657 [2024-11-20 09:12:18.547581] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.657 [2024-11-20 09:12:18.547588] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.657 [2024-11-20 09:12:18.547594] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.657 [2024-11-20 09:12:18.547601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.657 [2024-11-20 09:12:18.547606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.657 [2024-11-20 09:12:18.547612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.657 [2024-11-20 09:12:18.547617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:39.657 [2024-11-20 09:12:18.547625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
[2024-11-20 09:12:18.547625] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 3.770 ms, status: 0
[2024-11-20 09:12:18.557376] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 9.699 ms, status: 0
[2024-11-20 09:12:18.557775] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.250 ms, status: 0
[2024-11-20 09:12:18.592340] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.592567] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.592715] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.592820] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.651072] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.701583] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.701825] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.701993] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.702156] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.702301] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.702418] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.702514] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
[2024-11-20 09:12:18.702710] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 213.539 ms, result 0
09:12:19 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-11-20 09:12:19.470398] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
[2024-11-20 09:12:19.470729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74544 ]
[2024-11-20 09:12:19.634544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-20 09:12:19.756091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-20 09:12:20.044560] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 09:12:20.044812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-20 09:12:20.207023] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.005 ms, status: 0
[2024-11-20 09:12:20.210494] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 2.978 ms, status: 0
[2024-11-20 09:12:20.211006] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-20 09:12:20.211742] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-20 09:12:20.211778] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 0.786 ms, status: 0
[2024-11-20 09:12:20.213545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-20 09:12:20.227610] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 14.067 ms, status: 0
[2024-11-20 09:12:20.227796] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.026 ms, status: 0
[2024-11-20 09:12:20.235733] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 7.837 ms, status: 0
[2024-11-20 09:12:20.236050] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.063 ms, status: 0
[2024-11-20 09:12:20.236105] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.008 ms, status: 0
[2024-11-20 09:12:20.236154] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-20 09:12:20.240143] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 3.994 ms, status: 0
[2024-11-20 09:12:20.240272] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.014 ms, status: 0
[2024-11-20 09:12:20.240320] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-20 09:12:20.240345] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-20 09:12:20.240380] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-20 09:12:20.240396] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-20 09:12:20.240502] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-20 09:12:20.240513] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-20 09:12:20.240524] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-20 09:12:20.240535] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-20 09:12:20.240548] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-20 09:12:20.240556] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-20 09:12:20.240564] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-20 09:12:20.240572] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-20 09:12:20.240580] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-20 09:12:20.240589] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.272 ms, status: 0
[2024-11-20 09:12:20.240702] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.070 ms, status: 0
[2024-11-20 09:12:20.240832] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
  Region sb: offset 0.00 MiB, blocks 0.12 MiB
  Region l2p: offset 0.12 MiB, blocks 90.00 MiB
  Region band_md: offset 90.12 MiB, blocks 0.50 MiB
  Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
  Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
  Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
  Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
  Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
  Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
  Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
  Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
  Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
  Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
  Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-20 09:12:20.241199] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
  Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
  Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-20 09:12:20.241277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
  Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
  Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
  Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
  Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
  Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
  Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
  Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
  Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
  Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
  Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
  Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
  Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
  Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-20 09:12:20.241396] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
  Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
  Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
  Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
  Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-20 09:12:20.241440] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.677 ms, status: 0
[2024-11-20 09:12:20.273102] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 31.580 ms, status: 0
[2024-11-20 09:12:20.273301] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.064 ms, status: 0
[2024-11-20 09:12:20.321848] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 48.491 ms, status: 0
[2024-11-20 09:12:20.322079] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.004 ms, status: 0
[2024-11-20 09:12:20.322639] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.504 ms, status: 0
[2024-11-20 09:12:20.322864] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.122 ms, status: 0
[2024-11-20 09:12:20.338989] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 16.058 ms, status: 0
[2024-11-20 09:12:20.353408] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
[2024-11-20 09:12:20.353593] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-20 09:12:20.353613] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 14.449 ms, status: 0
[2024-11-20 09:12:20.379476] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 25.750 ms, status: 0
[2024-11-20 09:12:20.392168] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 12.493 ms, status: 0
[2024-11-20 09:12:20.404568] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 12.244 ms, status: 0
[2024-11-20 09:12:20.405302] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.557 ms, status: 0
[2024-11-20 09:12:20.469464] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 64.087 ms, status: 0
[2024-11-20 09:12:20.480922] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-20 09:12:20.500779] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 31.122 ms, status: 0
[2024-11-20 09:12:20.500999] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.015 ms, status: 0
[2024-11-20 09:12:20.501091] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.038 ms, status: 0
[2024-11-20 09:12:20.501147] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.005 ms, status: 0
[2024-11-20 09:12:20.501220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-20 09:12:20.501231] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.012 ms, status: 0
[2024-11-20 09:12:20.527701] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 26.422 ms, status: 0
[2024-11-20 09:12:20.528079] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.042 ms, status: 0
[2024-11-20 09:12:20.529211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20 09:12:20.532522] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 321.811 ms, result 0
[2024-11-20 09:12:20.533967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 09:12:20.547433] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-20T09:12:22.607Z] Copying: 23/256 [MB] (23 MBps)
[... 19 intermediate progress updates elided; throughput fell from ~23 MBps to ~9 MBps mid-copy ...]
[2024-11-20T09:12:41.271Z] Copying: 256/256 [MB] (average 12 MBps)
[2024-11-20 09:12:41.043701] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-20 09:12:41.058767] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.005 ms, status: 0
[2024-11-20 09:12:41.059193] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-20 09:12:41.062074] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 2.843 ms, status: 0
[2024-11-20 09:12:41.062548] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 0.240 ms, status: 0
[2024-11-20 09:12:41.066683] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 3.686 ms, status: 0
[2024-11-20 09:12:41.073657] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 6.905 ms, status: 0
[2024-11-20 09:12:41.098122] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 24.225 ms, status: 0
[2024-11-20 09:12:41.113607] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 15.381 ms, status: 0
[2024-11-20 09:12:41.113806] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.084 ms, status: 0
[2024-11-20 09:12:41.137864] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 24.005 ms, status: 0
[2024-11-20 09:12:41.161886] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 23.920 ms, status: 0
[2024-11-20 09:12:41.185670] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 23.692 ms, status: 0
[2024-11-20 09:12:41.210311] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 24.508 ms, status: 0
[2024-11-20 09:12:41.210419] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-11-20 09:12:41.210434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-100: identical, 0 / 261120 wr_cnt: 0 state: free ...]
[2024-11-20 09:12:41.211265] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-20 09:12:41.211273] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b0d92c9b-576a-461a-9df9-bb3d9af603a9
[2024-11-20 09:12:41.211283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-20 09:12:41.211291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-20 09:12:41.211299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-20 09:12:41.211308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-20 09:12:41.211315] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-20 09:12:41.211322] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   crit: 0
[2024-11-20 09:12:41.211330] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   high: 0
[2024-11-20 09:12:41.211337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   low: 0
[2024-11-20 09:12:41.211343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]   start: 0
[2024-11-20 09:12:41.211350] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 0.933 ms, status: 0
[2024-11-20 09:12:41.225372] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 13.960 ms, status: 0
[2024-11-20 09:12:41.225852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-11-20 09:12:41.225907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
[2024-11-20 09:12:41.225917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms
[2024-11-20 09:12:41.225925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0]
status: 0 00:19:02.354 [2024-11-20 09:12:41.266642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.354 [2024-11-20 09:12:41.266688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.354 [2024-11-20 09:12:41.266701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.354 [2024-11-20 09:12:41.266710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.354 [2024-11-20 09:12:41.266818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.354 [2024-11-20 09:12:41.266828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.354 [2024-11-20 09:12:41.266837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.354 [2024-11-20 09:12:41.266845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.354 [2024-11-20 09:12:41.266927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.354 [2024-11-20 09:12:41.266939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.354 [2024-11-20 09:12:41.266948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.354 [2024-11-20 09:12:41.266956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.354 [2024-11-20 09:12:41.266974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.354 [2024-11-20 09:12:41.266987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.354 [2024-11-20 09:12:41.266995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.354 [2024-11-20 09:12:41.267003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.615 [2024-11-20 09:12:41.354541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.354825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.616 [2024-11-20 09:12:41.354849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.354859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.426541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.426602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.616 [2024-11-20 09:12:41.426616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.426625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.426720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.426730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.616 [2024-11-20 09:12:41.426740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.426749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.426785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.426796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.616 [2024-11-20 09:12:41.426810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 
09:12:41.426819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.426958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.426970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.616 [2024-11-20 09:12:41.426980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.426988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.427026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.427038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:02.616 [2024-11-20 09:12:41.427047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.427059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.427111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.427123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.616 [2024-11-20 09:12:41.427132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.427141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.427198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:02.616 [2024-11-20 09:12:41.427211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.616 [2024-11-20 09:12:41.427224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:02.616 [2024-11-20 09:12:41.427232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.616 [2024-11-20 09:12:41.427417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 368.627 ms, result 0 00:19:03.560 00:19:03.560 00:19:03.560 09:12:42 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:04.132 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:04.132 Process with pid 74487 is not found 00:19:04.132 09:12:42 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74487 00:19:04.132 09:12:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 74487 ']' 00:19:04.132 09:12:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 74487 00:19:04.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74487) - No such process 00:19:04.132 09:12:42 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 74487 is not found' 00:19:04.132 ************************************ 00:19:04.132 END TEST ftl_trim 00:19:04.132 ************************************ 
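The ftl_trim teardown above follows a verify-then-clean pattern: check the md5 manifest against the read-back data, delete the test artifacts, then stop the target. A minimal sketch of that sequence, reusing the paths printed in the log; the pid variable and the kill guard only approximate the killprocess() helper and are not its verbatim source:

testdir=/home/vagrant/spdk_repo/spdk/test/ftl
md5sum -c "$testdir/testfile.md5"          # the log shows ".../data: OK"
rm -f "$testdir/testfile.md5" "$testdir/config/ftl.json"
rm -f "$testdir/random_pattern" "$testdir/data"
if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
    kill "$pid"                            # here pid 74487 had already exited
fi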
00:19:04.132 00:19:04.132 real 1m30.518s 00:19:04.132 user 1m46.759s 00:19:04.132 sys 0m14.768s 00:19:04.132 09:12:42 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.132 09:12:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:04.132 09:12:42 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:04.132 09:12:42 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:04.132 09:12:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.132 09:12:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:04.132 ************************************ 00:19:04.132 START TEST ftl_restore 00:19:04.132 ************************************ 00:19:04.132 09:12:42 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:04.394 * Looking for test storage... 00:19:04.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.394 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:04.394 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:19:04.394 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:04.394 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:04.394 09:12:43 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:04.394 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:04.394 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:04.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.394 --rc genhtml_branch_coverage=1 00:19:04.394 --rc genhtml_function_coverage=1 00:19:04.394 --rc genhtml_legend=1 00:19:04.394 --rc geninfo_all_blocks=1 00:19:04.395 --rc geninfo_unexecuted_blocks=1 00:19:04.395 00:19:04.395 ' 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.395 --rc genhtml_branch_coverage=1 00:19:04.395 --rc genhtml_function_coverage=1 00:19:04.395 --rc genhtml_legend=1 00:19:04.395 --rc geninfo_all_blocks=1 00:19:04.395 --rc geninfo_unexecuted_blocks=1 00:19:04.395 00:19:04.395 ' 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.395 --rc genhtml_branch_coverage=1 00:19:04.395 --rc genhtml_function_coverage=1 00:19:04.395 --rc genhtml_legend=1 00:19:04.395 --rc geninfo_all_blocks=1 00:19:04.395 --rc geninfo_unexecuted_blocks=1 00:19:04.395 00:19:04.395 ' 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:04.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:04.395 --rc genhtml_branch_coverage=1 00:19:04.395 --rc genhtml_function_coverage=1 00:19:04.395 --rc genhtml_legend=1 00:19:04.395 --rc geninfo_all_blocks=1 00:19:04.395 --rc geninfo_unexecuted_blocks=1 00:19:04.395 00:19:04.395 ' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
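The xtrace above steps through the lcov version gate: the version strings are split on '.' and '-' and compared field by field, so lt 1.15 2 succeeds and the branch/function coverage flags get exported. A standalone sketch of that comparison, condensed from the traced scripts/common.sh logic rather than copied verbatim:

lt() {                        # succeed when version $1 sorts before version $2
    local -a v1 v2
    local i
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1                  # equal versions are not "less than"
}
lt 1.15 2 && echo "old lcov: enable --rc lcov_branch_coverage=1"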
00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.4mEpVkFGpF 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:04.395 
09:12:43 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74853 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74853 00:19:04.395 09:12:43 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 74853 ']' 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.395 09:12:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:04.395 [2024-11-20 09:12:43.249900] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:19:04.395 [2024-11-20 09:12:43.250168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74853 ] 00:19:04.657 [2024-11-20 09:12:43.409889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.657 [2024-11-20 09:12:43.524636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:19:05.600 09:12:44 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:05.600 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:05.862 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.862 { 00:19:05.862 "name": "nvme0n1", 00:19:05.862 "aliases": [ 00:19:05.862 "83f7ec2e-873b-456e-9263-d4f2745d03c1" 00:19:05.862 ], 00:19:05.862 "product_name": "NVMe disk", 00:19:05.862 "block_size": 4096, 00:19:05.862 "num_blocks": 1310720, 00:19:05.862 "uuid": 
"83f7ec2e-873b-456e-9263-d4f2745d03c1", 00:19:05.862 "numa_id": -1, 00:19:05.862 "assigned_rate_limits": { 00:19:05.862 "rw_ios_per_sec": 0, 00:19:05.862 "rw_mbytes_per_sec": 0, 00:19:05.862 "r_mbytes_per_sec": 0, 00:19:05.862 "w_mbytes_per_sec": 0 00:19:05.862 }, 00:19:05.862 "claimed": true, 00:19:05.862 "claim_type": "read_many_write_one", 00:19:05.862 "zoned": false, 00:19:05.862 "supported_io_types": { 00:19:05.862 "read": true, 00:19:05.862 "write": true, 00:19:05.862 "unmap": true, 00:19:05.862 "flush": true, 00:19:05.862 "reset": true, 00:19:05.862 "nvme_admin": true, 00:19:05.862 "nvme_io": true, 00:19:05.862 "nvme_io_md": false, 00:19:05.862 "write_zeroes": true, 00:19:05.862 "zcopy": false, 00:19:05.862 "get_zone_info": false, 00:19:05.862 "zone_management": false, 00:19:05.862 "zone_append": false, 00:19:05.862 "compare": true, 00:19:05.862 "compare_and_write": false, 00:19:05.862 "abort": true, 00:19:05.862 "seek_hole": false, 00:19:05.862 "seek_data": false, 00:19:05.862 "copy": true, 00:19:05.862 "nvme_iov_md": false 00:19:05.862 }, 00:19:05.862 "driver_specific": { 00:19:05.862 "nvme": [ 00:19:05.862 { 00:19:05.862 "pci_address": "0000:00:11.0", 00:19:05.862 "trid": { 00:19:05.862 "trtype": "PCIe", 00:19:05.862 "traddr": "0000:00:11.0" 00:19:05.862 }, 00:19:05.862 "ctrlr_data": { 00:19:05.863 "cntlid": 0, 00:19:05.863 "vendor_id": "0x1b36", 00:19:05.863 "model_number": "QEMU NVMe Ctrl", 00:19:05.863 "serial_number": "12341", 00:19:05.863 "firmware_revision": "8.0.0", 00:19:05.863 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:05.863 "oacs": { 00:19:05.863 "security": 0, 00:19:05.863 "format": 1, 00:19:05.863 "firmware": 0, 00:19:05.863 "ns_manage": 1 00:19:05.863 }, 00:19:05.863 "multi_ctrlr": false, 00:19:05.863 "ana_reporting": false 00:19:05.863 }, 00:19:05.863 "vs": { 00:19:05.863 "nvme_version": "1.4" 00:19:05.863 }, 00:19:05.863 "ns_data": { 00:19:05.863 "id": 1, 00:19:05.863 "can_share": false 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ], 00:19:05.863 "mp_policy": "active_passive" 00:19:05.863 } 00:19:05.863 } 00:19:05.863 ]' 00:19:05.863 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.863 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.863 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.863 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:05.863 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:05.863 09:12:44 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:19:05.863 09:12:44 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:19:05.863 09:12:44 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:05.863 09:12:44 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:19:05.863 09:12:44 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:05.863 09:12:44 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:06.125 09:12:44 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d10c1c75-f9bd-4ca1-8f79-5b130fc8e131 00:19:06.125 09:12:44 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:19:06.125 09:12:44 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d10c1c75-f9bd-4ca1-8f79-5b130fc8e131 00:19:06.386 09:12:45 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:19:06.648 09:12:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=f9f86486-3523-4211-9f51-14b80baef7e3 00:19:06.648 09:12:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f9f86486-3523-4211-9f51-14b80baef7e3 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=9649f4a7-61c8-400a-b073-1586613c4b34 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=9649f4a7-61c8-400a-b073-1586613c4b34 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:19:06.929 09:12:45 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:06.929 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9649f4a7-61c8-400a-b073-1586613c4b34 00:19:06.929 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:06.929 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:06.929 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:06.929 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:07.245 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.245 { 00:19:07.245 "name": "9649f4a7-61c8-400a-b073-1586613c4b34", 00:19:07.245 "aliases": [ 00:19:07.245 "lvs/nvme0n1p0" 00:19:07.245 ], 00:19:07.245 "product_name": "Logical Volume", 00:19:07.245 "block_size": 4096, 00:19:07.245 "num_blocks": 26476544, 00:19:07.245 "uuid": "9649f4a7-61c8-400a-b073-1586613c4b34", 00:19:07.245 "assigned_rate_limits": { 00:19:07.245 "rw_ios_per_sec": 0, 00:19:07.245 "rw_mbytes_per_sec": 0, 00:19:07.245 "r_mbytes_per_sec": 0, 00:19:07.245 "w_mbytes_per_sec": 0 00:19:07.245 }, 00:19:07.245 "claimed": false, 00:19:07.245 "zoned": false, 00:19:07.245 "supported_io_types": { 00:19:07.245 "read": true, 00:19:07.245 "write": true, 00:19:07.245 "unmap": true, 00:19:07.245 "flush": false, 00:19:07.245 "reset": true, 00:19:07.245 "nvme_admin": false, 00:19:07.245 "nvme_io": false, 00:19:07.245 "nvme_io_md": false, 00:19:07.245 "write_zeroes": true, 00:19:07.245 "zcopy": false, 00:19:07.245 "get_zone_info": false, 00:19:07.245 "zone_management": false, 00:19:07.245 "zone_append": false, 00:19:07.245 "compare": false, 00:19:07.245 "compare_and_write": false, 00:19:07.245 "abort": false, 00:19:07.245 "seek_hole": true, 00:19:07.245 "seek_data": true, 00:19:07.245 "copy": false, 00:19:07.245 "nvme_iov_md": false 00:19:07.245 }, 00:19:07.245 "driver_specific": { 00:19:07.245 "lvol": { 00:19:07.245 "lvol_store_uuid": "f9f86486-3523-4211-9f51-14b80baef7e3", 00:19:07.245 "base_bdev": "nvme0n1", 00:19:07.245 "thin_provision": true, 00:19:07.245 "num_allocated_clusters": 0, 00:19:07.245 "snapshot": false, 00:19:07.245 "clone": false, 00:19:07.245 "esnap_clone": false 00:19:07.245 } 00:19:07.245 } 00:19:07.245 } 00:19:07.245 ]' 00:19:07.245 09:12:45 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:07.245 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.245 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.245 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:07.245 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:07.245 09:12:45 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:07.245 09:12:45 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:19:07.245 09:12:45 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:19:07.245 09:12:45 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:07.506 09:12:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:07.506 09:12:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:07.506 09:12:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:07.506 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9649f4a7-61c8-400a-b073-1586613c4b34 00:19:07.506 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.506 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:07.506 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:07.506 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:07.767 { 00:19:07.767 "name": "9649f4a7-61c8-400a-b073-1586613c4b34", 00:19:07.767 "aliases": [ 00:19:07.767 "lvs/nvme0n1p0" 00:19:07.767 ], 00:19:07.767 "product_name": "Logical Volume", 00:19:07.767 "block_size": 4096, 00:19:07.767 "num_blocks": 26476544, 00:19:07.767 "uuid": "9649f4a7-61c8-400a-b073-1586613c4b34", 00:19:07.767 "assigned_rate_limits": { 00:19:07.767 "rw_ios_per_sec": 0, 00:19:07.767 "rw_mbytes_per_sec": 0, 00:19:07.767 "r_mbytes_per_sec": 0, 00:19:07.767 "w_mbytes_per_sec": 0 00:19:07.767 }, 00:19:07.767 "claimed": false, 00:19:07.767 "zoned": false, 00:19:07.767 "supported_io_types": { 00:19:07.767 "read": true, 00:19:07.767 "write": true, 00:19:07.767 "unmap": true, 00:19:07.767 "flush": false, 00:19:07.767 "reset": true, 00:19:07.767 "nvme_admin": false, 00:19:07.767 "nvme_io": false, 00:19:07.767 "nvme_io_md": false, 00:19:07.767 "write_zeroes": true, 00:19:07.767 "zcopy": false, 00:19:07.767 "get_zone_info": false, 00:19:07.767 "zone_management": false, 00:19:07.767 "zone_append": false, 00:19:07.767 "compare": false, 00:19:07.767 "compare_and_write": false, 00:19:07.767 "abort": false, 00:19:07.767 "seek_hole": true, 00:19:07.767 "seek_data": true, 00:19:07.767 "copy": false, 00:19:07.767 "nvme_iov_md": false 00:19:07.767 }, 00:19:07.767 "driver_specific": { 00:19:07.767 "lvol": { 00:19:07.767 "lvol_store_uuid": "f9f86486-3523-4211-9f51-14b80baef7e3", 00:19:07.767 "base_bdev": "nvme0n1", 00:19:07.767 "thin_provision": true, 00:19:07.767 "num_allocated_clusters": 0, 00:19:07.767 "snapshot": false, 00:19:07.767 "clone": false, 00:19:07.767 "esnap_clone": false 00:19:07.767 } 00:19:07.767 } 00:19:07.767 } 00:19:07.767 ]' 00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
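For context, the JSON dump and jq calls traced above are what get_bdev_size() does: fetch the bdev descriptor over RPC, pull block_size and num_blocks, and convert to MiB. A condensed sketch using the rpc.py path and lvol UUID from this run (the helper shape is inferred from the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bdev_info=$("$rpc" bdev_get_bdevs -b 9649f4a7-61c8-400a-b073-1586613c4b34)
bs=$(jq '.[] .block_size' <<< "$bdev_info")     # 4096 in this run
nb=$(jq '.[] .num_blocks' <<< "$bdev_info")     # 26476544 in this run
echo $((bs * nb / 1024 / 1024))                 # 103424 MiB, as echoed above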
00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:07.767 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:07.767 09:12:46 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:19:07.767 09:12:46 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:08.029 09:12:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:08.029 09:12:46 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=9649f4a7-61c8-400a-b073-1586613c4b34 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9649f4a7-61c8-400a-b073-1586613c4b34 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.029 { 00:19:08.029 "name": "9649f4a7-61c8-400a-b073-1586613c4b34", 00:19:08.029 "aliases": [ 00:19:08.029 "lvs/nvme0n1p0" 00:19:08.029 ], 00:19:08.029 "product_name": "Logical Volume", 00:19:08.029 "block_size": 4096, 00:19:08.029 "num_blocks": 26476544, 00:19:08.029 "uuid": "9649f4a7-61c8-400a-b073-1586613c4b34", 00:19:08.029 "assigned_rate_limits": { 00:19:08.029 "rw_ios_per_sec": 0, 00:19:08.029 "rw_mbytes_per_sec": 0, 00:19:08.029 "r_mbytes_per_sec": 0, 00:19:08.029 "w_mbytes_per_sec": 0 00:19:08.029 }, 00:19:08.029 "claimed": false, 00:19:08.029 "zoned": false, 00:19:08.029 "supported_io_types": { 00:19:08.029 "read": true, 00:19:08.029 "write": true, 00:19:08.029 "unmap": true, 00:19:08.029 "flush": false, 00:19:08.029 "reset": true, 00:19:08.029 "nvme_admin": false, 00:19:08.029 "nvme_io": false, 00:19:08.029 "nvme_io_md": false, 00:19:08.029 "write_zeroes": true, 00:19:08.029 "zcopy": false, 00:19:08.029 "get_zone_info": false, 00:19:08.029 "zone_management": false, 00:19:08.029 "zone_append": false, 00:19:08.029 "compare": false, 00:19:08.029 "compare_and_write": false, 00:19:08.029 "abort": false, 00:19:08.029 "seek_hole": true, 00:19:08.029 "seek_data": true, 00:19:08.029 "copy": false, 00:19:08.029 "nvme_iov_md": false 00:19:08.029 }, 00:19:08.029 "driver_specific": { 00:19:08.029 "lvol": { 00:19:08.029 "lvol_store_uuid": "f9f86486-3523-4211-9f51-14b80baef7e3", 00:19:08.029 "base_bdev": "nvme0n1", 00:19:08.029 "thin_provision": true, 00:19:08.029 "num_allocated_clusters": 0, 00:19:08.029 "snapshot": false, 00:19:08.029 "clone": false, 00:19:08.029 "esnap_clone": false 00:19:08.029 } 00:19:08.029 } 00:19:08.029 } 00:19:08.029 ]' 00:19:08.029 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.292 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.292 09:12:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.292 09:12:47 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:19:08.292 09:12:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:08.292 09:12:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9649f4a7-61c8-400a-b073-1586613c4b34 --l2p_dram_limit 10' 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:08.292 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:08.292 09:12:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9649f4a7-61c8-400a-b073-1586613c4b34 --l2p_dram_limit 10 -c nvc0n1p0 00:19:08.292 [2024-11-20 09:12:47.192303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.192344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:08.292 [2024-11-20 09:12:47.192357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:08.292 [2024-11-20 09:12:47.192365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.192406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.192414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:08.292 [2024-11-20 09:12:47.192422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:08.292 [2024-11-20 09:12:47.192428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.192447] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:08.292 [2024-11-20 09:12:47.194662] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:08.292 [2024-11-20 09:12:47.194695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.194702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:08.292 [2024-11-20 09:12:47.194710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.250 ms 00:19:08.292 [2024-11-20 09:12:47.194716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.194744] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ace07a7f-bff5-45b2-a4fb-6c01762c9936 00:19:08.292 [2024-11-20 09:12:47.196049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.196078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:08.292 [2024-11-20 09:12:47.196087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:08.292 [2024-11-20 09:12:47.196096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.202971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 
09:12:47.202996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:08.292 [2024-11-20 09:12:47.203005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.806 ms 00:19:08.292 [2024-11-20 09:12:47.203013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.203083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.203208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:08.292 [2024-11-20 09:12:47.203216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:08.292 [2024-11-20 09:12:47.203226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.203257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.203268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:08.292 [2024-11-20 09:12:47.203274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:08.292 [2024-11-20 09:12:47.203283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.203301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:08.292 [2024-11-20 09:12:47.206541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.206566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:08.292 [2024-11-20 09:12:47.206576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:19:08.292 [2024-11-20 09:12:47.206582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.206612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.206618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:08.292 [2024-11-20 09:12:47.206626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:08.292 [2024-11-20 09:12:47.206632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.206651] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:08.292 [2024-11-20 09:12:47.206761] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:08.292 [2024-11-20 09:12:47.206775] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:08.292 [2024-11-20 09:12:47.206784] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:08.292 [2024-11-20 09:12:47.206794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:08.292 [2024-11-20 09:12:47.206801] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:08.292 [2024-11-20 09:12:47.206809] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:08.292 [2024-11-20 09:12:47.206815] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:08.292 [2024-11-20 09:12:47.206825] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:08.292 [2024-11-20 09:12:47.206830] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:08.292 [2024-11-20 09:12:47.206838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.206843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:08.292 [2024-11-20 09:12:47.206851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:19:08.292 [2024-11-20 09:12:47.206861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.206940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.292 [2024-11-20 09:12:47.206948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:08.292 [2024-11-20 09:12:47.206956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:08.292 [2024-11-20 09:12:47.206962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.292 [2024-11-20 09:12:47.207043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:08.292 [2024-11-20 09:12:47.207051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:08.292 [2024-11-20 09:12:47.207059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.292 [2024-11-20 09:12:47.207065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.292 [2024-11-20 09:12:47.207072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:08.292 [2024-11-20 09:12:47.207078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:08.292 [2024-11-20 09:12:47.207086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:08.292 [2024-11-20 09:12:47.207092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:08.292 [2024-11-20 09:12:47.207099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:08.292 [2024-11-20 09:12:47.207104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.292 [2024-11-20 09:12:47.207111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:08.292 [2024-11-20 09:12:47.207118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:08.292 [2024-11-20 09:12:47.207125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:08.292 [2024-11-20 09:12:47.207130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:08.292 [2024-11-20 09:12:47.207137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:08.292 [2024-11-20 09:12:47.207142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.292 [2024-11-20 09:12:47.207155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:08.293 [2024-11-20 09:12:47.207161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:08.293 [2024-11-20 09:12:47.207180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:08.293 
[2024-11-20 09:12:47.207197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:08.293 [2024-11-20 09:12:47.207215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:08.293 [2024-11-20 09:12:47.207233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:08.293 [2024-11-20 09:12:47.207252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.293 [2024-11-20 09:12:47.207265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:08.293 [2024-11-20 09:12:47.207270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:08.293 [2024-11-20 09:12:47.207276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:08.293 [2024-11-20 09:12:47.207287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:08.293 [2024-11-20 09:12:47.207294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:08.293 [2024-11-20 09:12:47.207299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:08.293 [2024-11-20 09:12:47.207311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:08.293 [2024-11-20 09:12:47.207318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207322] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:08.293 [2024-11-20 09:12:47.207330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:08.293 [2024-11-20 09:12:47.207336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:08.293 [2024-11-20 09:12:47.207350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:08.293 [2024-11-20 09:12:47.207359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:08.293 [2024-11-20 09:12:47.207364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:08.293 [2024-11-20 09:12:47.207371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:08.293 [2024-11-20 09:12:47.207377] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:08.293 [2024-11-20 09:12:47.207384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:08.293 [2024-11-20 09:12:47.207393] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:08.293 [2024-11-20 
09:12:47.207402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:08.293 [2024-11-20 09:12:47.207418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:08.293 [2024-11-20 09:12:47.207424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:08.293 [2024-11-20 09:12:47.207431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:08.293 [2024-11-20 09:12:47.207437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:08.293 [2024-11-20 09:12:47.207443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:08.293 [2024-11-20 09:12:47.207448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:08.293 [2024-11-20 09:12:47.207455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:08.293 [2024-11-20 09:12:47.207460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:08.293 [2024-11-20 09:12:47.207469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:08.293 [2024-11-20 09:12:47.207500] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:08.293 [2024-11-20 09:12:47.207508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:08.293 [2024-11-20 09:12:47.207521] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:08.293 [2024-11-20 09:12:47.207526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:08.293 [2024-11-20 09:12:47.207534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:08.293 [2024-11-20 09:12:47.207540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:08.293 [2024-11-20 09:12:47.207547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:08.293 [2024-11-20 09:12:47.207552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:19:08.293 [2024-11-20 09:12:47.207559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:08.293 [2024-11-20 09:12:47.207600] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:19:08.293 [2024-11-20 09:12:47.207613] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:12.505 [2024-11-20 09:12:51.007714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.505 [2024-11-20 09:12:51.007770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:12.505 [2024-11-20 09:12:51.007784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3800.097 ms 00:19:12.505 [2024-11-20 09:12:51.007793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.505 [2024-11-20 09:12:51.031256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.505 [2024-11-20 09:12:51.031303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.505 [2024-11-20 09:12:51.031316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.281 ms 00:19:12.505 [2024-11-20 09:12:51.031326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.505 [2024-11-20 09:12:51.031418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.505 [2024-11-20 09:12:51.031429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:12.505 [2024-11-20 09:12:51.031437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:12.505 [2024-11-20 09:12:51.031448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.505 [2024-11-20 09:12:51.058242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.505 [2024-11-20 09:12:51.058277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.505 [2024-11-20 09:12:51.058286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.747 ms 00:19:12.505 [2024-11-20 09:12:51.058295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.505 [2024-11-20 09:12:51.058318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.505 [2024-11-20 09:12:51.058330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:12.505 [2024-11-20 09:12:51.058337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:12.505 [2024-11-20 09:12:51.058345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.505 [2024-11-20 09:12:51.058752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.058770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:12.506 [2024-11-20 09:12:51.058778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:19:12.506 [2024-11-20 09:12:51.058786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 
[2024-11-20 09:12:51.058868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.058894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:12.506 [2024-11-20 09:12:51.058904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:12.506 [2024-11-20 09:12:51.058914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.072029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.072058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:12.506 [2024-11-20 09:12:51.072066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.100 ms 00:19:12.506 [2024-11-20 09:12:51.072074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.081971] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:12.506 [2024-11-20 09:12:51.084930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.085065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:12.506 [2024-11-20 09:12:51.085081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.797 ms 00:19:12.506 [2024-11-20 09:12:51.085087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.174419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.174450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:12.506 [2024-11-20 09:12:51.174463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.308 ms 00:19:12.506 [2024-11-20 09:12:51.174470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.174618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.174629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:12.506 [2024-11-20 09:12:51.174640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:19:12.506 [2024-11-20 09:12:51.174647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.193519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.193639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:12.506 [2024-11-20 09:12:51.193656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.844 ms 00:19:12.506 [2024-11-20 09:12:51.193663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.212093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.212196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:12.506 [2024-11-20 09:12:51.212213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.397 ms 00:19:12.506 [2024-11-20 09:12:51.212220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.212659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.212669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:12.506 
[2024-11-20 09:12:51.212811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:19:12.506 [2024-11-20 09:12:51.212819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.278468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.278496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:12.506 [2024-11-20 09:12:51.278509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.620 ms 00:19:12.506 [2024-11-20 09:12:51.278516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.298710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.298822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:12.506 [2024-11-20 09:12:51.298839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.138 ms 00:19:12.506 [2024-11-20 09:12:51.298845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.317514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.317612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:12.506 [2024-11-20 09:12:51.317628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.629 ms 00:19:12.506 [2024-11-20 09:12:51.317634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.336746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.336837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:12.506 [2024-11-20 09:12:51.336852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.084 ms 00:19:12.506 [2024-11-20 09:12:51.336858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.336896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.336904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:12.506 [2024-11-20 09:12:51.336915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:12.506 [2024-11-20 09:12:51.336921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.336994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:12.506 [2024-11-20 09:12:51.337003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:12.506 [2024-11-20 09:12:51.337013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:12.506 [2024-11-20 09:12:51.337019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.506 [2024-11-20 09:12:51.337823] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4145.134 ms, result 0 00:19:12.506 { 00:19:12.506 "name": "ftl0", 00:19:12.506 "uuid": "ace07a7f-bff5-45b2-a4fb-6c01762c9936" 00:19:12.506 } 00:19:12.506 09:12:51 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:12.506 09:12:51 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:12.768 09:12:51 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:19:12.768 09:12:51 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:13.032 [2024-11-20 09:12:51.757367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.757400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:13.032 [2024-11-20 09:12:51.757409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:13.032 [2024-11-20 09:12:51.757421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.757440] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.032 [2024-11-20 09:12:51.759679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.759699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:13.032 [2024-11-20 09:12:51.759709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.224 ms 00:19:13.032 [2024-11-20 09:12:51.759716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.759933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.759942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:13.032 [2024-11-20 09:12:51.759953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:19:13.032 [2024-11-20 09:12:51.759959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.762412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.762425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:13.032 [2024-11-20 09:12:51.762433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.438 ms 00:19:13.032 [2024-11-20 09:12:51.762439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.767141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.767158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:13.032 [2024-11-20 09:12:51.767169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.687 ms 00:19:13.032 [2024-11-20 09:12:51.767175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.785275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.785297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:13.032 [2024-11-20 09:12:51.785307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.059 ms 00:19:13.032 [2024-11-20 09:12:51.785313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.798917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.798942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:13.032 [2024-11-20 09:12:51.798953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.571 ms 00:19:13.032 [2024-11-20 09:12:51.798959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.799073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.799082] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:13.032 [2024-11-20 09:12:51.799092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:19:13.032 [2024-11-20 09:12:51.799099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.817308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.817330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:13.032 [2024-11-20 09:12:51.817340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.193 ms 00:19:13.032 [2024-11-20 09:12:51.817346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.032 [2024-11-20 09:12:51.835537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.032 [2024-11-20 09:12:51.835558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:13.032 [2024-11-20 09:12:51.835568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:19:13.032 [2024-11-20 09:12:51.835574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-11-20 09:12:51.852994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.033 [2024-11-20 09:12:51.853016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:13.033 [2024-11-20 09:12:51.853025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.388 ms 00:19:13.033 [2024-11-20 09:12:51.853031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-11-20 09:12:51.870506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.033 [2024-11-20 09:12:51.870527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:13.033 [2024-11-20 09:12:51.870537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.421 ms 00:19:13.033 [2024-11-20 09:12:51.870542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.033 [2024-11-20 09:12:51.870570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:13.033 [2024-11-20 09:12:51.870581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870650] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 
[2024-11-20 09:12:51.870818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.870995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:19:13.033 [2024-11-20 09:12:51.871003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:13.033 [2024-11-20 09:12:51.871153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:13.034 [2024-11-20 09:12:51.871287] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:13.034 [2024-11-20 09:12:51.871297] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ace07a7f-bff5-45b2-a4fb-6c01762c9936 00:19:13.034 [2024-11-20 09:12:51.871303] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:13.034 [2024-11-20 09:12:51.871312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:13.034 [2024-11-20 09:12:51.871318] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:13.034 [2024-11-20 09:12:51.871328] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:13.034 [2024-11-20 09:12:51.871334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:13.034 [2024-11-20 09:12:51.871341] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:13.034 [2024-11-20 09:12:51.871347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:13.034 [2024-11-20 09:12:51.871354] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:13.034 [2024-11-20 09:12:51.871360] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:19:13.034 [2024-11-20 09:12:51.871367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.034 [2024-11-20 09:12:51.871373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:13.034 [2024-11-20 09:12:51.871381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.798 ms 00:19:13.034 [2024-11-20 09:12:51.871386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.034 [2024-11-20 09:12:51.881236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.034 [2024-11-20 09:12:51.881255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.034 [2024-11-20 09:12:51.881265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.822 ms 00:19:13.034 [2024-11-20 09:12:51.881271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.034 [2024-11-20 09:12:51.881540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.034 [2024-11-20 09:12:51.881548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.034 [2024-11-20 09:12:51.881556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:13.034 [2024-11-20 09:12:51.881563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.034 [2024-11-20 09:12:51.916516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.034 [2024-11-20 09:12:51.916539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.034 [2024-11-20 09:12:51.916550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.034 [2024-11-20 09:12:51.916557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.034 [2024-11-20 09:12:51.916603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.034 [2024-11-20 09:12:51.916610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.034 [2024-11-20 09:12:51.916618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.034 [2024-11-20 09:12:51.916625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.034 [2024-11-20 09:12:51.916681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.034 [2024-11-20 09:12:51.916689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.034 [2024-11-20 09:12:51.916698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.034 [2024-11-20 09:12:51.916703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.034 [2024-11-20 09:12:51.916729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.034 [2024-11-20 09:12:51.916735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.034 [2024-11-20 09:12:51.916742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.034 [2024-11-20 09:12:51.916748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:51.979279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:51.979310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.297 [2024-11-20 09:12:51.979323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 
[2024-11-20 09:12:51.979329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:13.297 [2024-11-20 09:12:52.029489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:13.297 [2024-11-20 09:12:52.029601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:13.297 [2024-11-20 09:12:52.029665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:13.297 [2024-11-20 09:12:52.029768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:13.297 [2024-11-20 09:12:52.029817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:13.297 [2024-11-20 09:12:52.029890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.029943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.297 [2024-11-20 09:12:52.029951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:13.297 [2024-11-20 09:12:52.029960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.297 [2024-11-20 09:12:52.029966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.297 [2024-11-20 09:12:52.030086] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 272.676 ms, result 0 00:19:13.297 true 00:19:13.297 09:12:52 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74853 00:19:13.297 
09:12:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74853 ']' 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74853 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74853 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.297 killing process with pid 74853 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74853' 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 74853 00:19:13.297 09:12:52 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 74853 00:19:18.594 09:12:57 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:23.890 262144+0 records in 00:19:23.890 262144+0 records out 00:19:23.890 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.34418 s, 247 MB/s 00:19:23.890 09:13:01 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:25.275 09:13:03 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:25.275 [2024-11-20 09:13:04.030773] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:19:25.275 [2024-11-20 09:13:04.030915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75088 ] 00:19:25.275 [2024-11-20 09:13:04.189555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.535 [2024-11-20 09:13:04.287080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.798 [2024-11-20 09:13:04.514090] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.798 [2024-11-20 09:13:04.514141] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:25.798 [2024-11-20 09:13:04.671655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.671690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:25.798 [2024-11-20 09:13:04.671705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:25.798 [2024-11-20 09:13:04.671712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.671747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.671755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:25.798 [2024-11-20 09:13:04.671764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:25.798 [2024-11-20 09:13:04.671770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.671783] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:19:25.798 [2024-11-20 09:13:04.672302] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:25.798 [2024-11-20 09:13:04.672315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.672322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:25.798 [2024-11-20 09:13:04.672329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:19:25.798 [2024-11-20 09:13:04.672335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.673583] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:25.798 [2024-11-20 09:13:04.684002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.684032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:25.798 [2024-11-20 09:13:04.684043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.419 ms 00:19:25.798 [2024-11-20 09:13:04.684049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.684101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.684109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:25.798 [2024-11-20 09:13:04.684116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:25.798 [2024-11-20 09:13:04.684122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.690475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.690498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:25.798 [2024-11-20 09:13:04.690505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:19:25.798 [2024-11-20 09:13:04.690511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.690569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.690576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:25.798 [2024-11-20 09:13:04.690582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:25.798 [2024-11-20 09:13:04.690588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.690631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.690640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:25.798 [2024-11-20 09:13:04.690646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:25.798 [2024-11-20 09:13:04.690653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.690668] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:25.798 [2024-11-20 09:13:04.693727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.693746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:25.798 [2024-11-20 09:13:04.693754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.063 ms 00:19:25.798 [2024-11-20 09:13:04.693761] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.693789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.798 [2024-11-20 09:13:04.693796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:25.798 [2024-11-20 09:13:04.693802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:25.798 [2024-11-20 09:13:04.693808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.798 [2024-11-20 09:13:04.693823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:25.798 [2024-11-20 09:13:04.693839] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:25.798 [2024-11-20 09:13:04.693868] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:25.798 [2024-11-20 09:13:04.693892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:25.798 [2024-11-20 09:13:04.693978] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:25.798 [2024-11-20 09:13:04.693987] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:25.798 [2024-11-20 09:13:04.693996] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:25.798 [2024-11-20 09:13:04.694004] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:25.798 [2024-11-20 09:13:04.694011] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:25.798 [2024-11-20 09:13:04.694017] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:25.798 [2024-11-20 09:13:04.694023] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:25.798 [2024-11-20 09:13:04.694029] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:25.798 [2024-11-20 09:13:04.694035] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:25.798 [2024-11-20 09:13:04.694043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.799 [2024-11-20 09:13:04.694049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:25.799 [2024-11-20 09:13:04.694056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:19:25.799 [2024-11-20 09:13:04.694062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.799 [2024-11-20 09:13:04.694125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.799 [2024-11-20 09:13:04.694131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:25.799 [2024-11-20 09:13:04.694138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:25.799 [2024-11-20 09:13:04.694143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:25.799 [2024-11-20 09:13:04.694220] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:25.799 [2024-11-20 09:13:04.694236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:25.799 [2024-11-20 09:13:04.694243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.799 
[2024-11-20 09:13:04.694250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:25.799 [2024-11-20 09:13:04.694263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:25.799 [2024-11-20 09:13:04.694279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.799 [2024-11-20 09:13:04.694290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:25.799 [2024-11-20 09:13:04.694295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:25.799 [2024-11-20 09:13:04.694301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:25.799 [2024-11-20 09:13:04.694306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:25.799 [2024-11-20 09:13:04.694311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:25.799 [2024-11-20 09:13:04.694320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:25.799 [2024-11-20 09:13:04.694331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:25.799 [2024-11-20 09:13:04.694347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:25.799 [2024-11-20 09:13:04.694363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:25.799 [2024-11-20 09:13:04.694379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694389] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:25.799 [2024-11-20 09:13:04.694394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:25.799 [2024-11-20 09:13:04.694410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.799 [2024-11-20 09:13:04.694421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:19:25.799 [2024-11-20 09:13:04.694426] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:25.799 [2024-11-20 09:13:04.694431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:25.799 [2024-11-20 09:13:04.694437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:25.799 [2024-11-20 09:13:04.694443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:25.799 [2024-11-20 09:13:04.694448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:25.799 [2024-11-20 09:13:04.694458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:25.799 [2024-11-20 09:13:04.694464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694469] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:25.799 [2024-11-20 09:13:04.694475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:25.799 [2024-11-20 09:13:04.694481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:25.799 [2024-11-20 09:13:04.694493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:25.799 [2024-11-20 09:13:04.694498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:25.799 [2024-11-20 09:13:04.694503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:25.799 [2024-11-20 09:13:04.694508] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:25.799 [2024-11-20 09:13:04.694513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:25.799 [2024-11-20 09:13:04.694518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:25.799 [2024-11-20 09:13:04.694526] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:25.799 [2024-11-20 09:13:04.694534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:25.799 [2024-11-20 09:13:04.694545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:25.799 [2024-11-20 09:13:04.694550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:25.799 [2024-11-20 09:13:04.694557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:25.799 [2024-11-20 09:13:04.694563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:25.799 [2024-11-20 09:13:04.694569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:25.799 [2024-11-20 09:13:04.694574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:25.799 [2024-11-20 09:13:04.694580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:25.799 [2024-11-20 09:13:04.694585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:25.799 [2024-11-20 09:13:04.694591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:25.799 [2024-11-20 09:13:04.694620] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:25.799 [2024-11-20 09:13:04.694628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694634] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:25.799 [2024-11-20 09:13:04.694640] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:25.799 [2024-11-20 09:13:04.694645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:25.799 [2024-11-20 09:13:04.694651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:25.799 [2024-11-20 09:13:04.694657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:25.799 [2024-11-20 09:13:04.694663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:25.799 [2024-11-20 09:13:04.694669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.490 ms 00:19:25.799 [2024-11-20 09:13:04.694674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.718854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.718887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:26.062 [2024-11-20 09:13:04.718895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.139 ms 00:19:26.062 [2024-11-20 09:13:04.718902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.718967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.718974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:26.062 [2024-11-20 09:13:04.718980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 
00:19:26.062 [2024-11-20 09:13:04.718986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.766952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.766980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:26.062 [2024-11-20 09:13:04.766990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.925 ms 00:19:26.062 [2024-11-20 09:13:04.766997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.767028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.767036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:26.062 [2024-11-20 09:13:04.767043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:26.062 [2024-11-20 09:13:04.767052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.767475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.767496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:26.062 [2024-11-20 09:13:04.767504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:19:26.062 [2024-11-20 09:13:04.767510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.767625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.767634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:26.062 [2024-11-20 09:13:04.767640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:26.062 [2024-11-20 09:13:04.767649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.779527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.779550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:26.062 [2024-11-20 09:13:04.779560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.862 ms 00:19:26.062 [2024-11-20 09:13:04.779566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.790349] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:26.062 [2024-11-20 09:13:04.790374] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:26.062 [2024-11-20 09:13:04.790383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.790390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:26.062 [2024-11-20 09:13:04.790398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.743 ms 00:19:26.062 [2024-11-20 09:13:04.790404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.809014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.809050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:26.062 [2024-11-20 09:13:04.809062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.579 ms 00:19:26.062 [2024-11-20 09:13:04.809068] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.818513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.818542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:26.062 [2024-11-20 09:13:04.818550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.414 ms 00:19:26.062 [2024-11-20 09:13:04.818556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.062 [2024-11-20 09:13:04.827501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.062 [2024-11-20 09:13:04.827523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:26.062 [2024-11-20 09:13:04.827531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.919 ms 00:19:26.063 [2024-11-20 09:13:04.827537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.828019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.828036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:26.063 [2024-11-20 09:13:04.828044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:19:26.063 [2024-11-20 09:13:04.828050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.876054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.876083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:26.063 [2024-11-20 09:13:04.876093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.988 ms 00:19:26.063 [2024-11-20 09:13:04.876104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.884543] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:26.063 [2024-11-20 09:13:04.886784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.886804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:26.063 [2024-11-20 09:13:04.886814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.650 ms 00:19:26.063 [2024-11-20 09:13:04.886821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.886924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.886934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:26.063 [2024-11-20 09:13:04.886941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:26.063 [2024-11-20 09:13:04.886947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.887005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.887013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:26.063 [2024-11-20 09:13:04.887020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:26.063 [2024-11-20 09:13:04.887027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.887044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.887051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:19:26.063 [2024-11-20 09:13:04.887058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:26.063 [2024-11-20 09:13:04.887064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.887092] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:26.063 [2024-11-20 09:13:04.887100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.887108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:26.063 [2024-11-20 09:13:04.887115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:26.063 [2024-11-20 09:13:04.887121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.905644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.905667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:26.063 [2024-11-20 09:13:04.905676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:19:26.063 [2024-11-20 09:13:04.905683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.905742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:26.063 [2024-11-20 09:13:04.905750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:26.063 [2024-11-20 09:13:04.905757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:26.063 [2024-11-20 09:13:04.905763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:26.063 [2024-11-20 09:13:04.906611] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 234.582 ms, result 0 00:19:27.005
[2024-11-20T09:13:06.929Z] Copying: 9712/1048576 [kB] (9712 kBps) [... intermediate copy-progress updates elided ...] [2024-11-20T09:14:48.217Z] Copying: 1047132/1048576 [kB] (9268 kBps) [2024-11-20T09:14:48.217Z] Copying: 1024/1024 [MB] (average 10164 kBps)
[2024-11-20 09:14:48.078902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.079101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:09.298 [2024-11-20 09:14:48.079191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:09.298 [2024-11-20 09:14:48.079224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.079272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:09.298 [2024-11-20 09:14:48.082440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.082634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:09.298 [2024-11-20 09:14:48.082731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:21:09.298 [2024-11-20 09:14:48.082768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.085970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.086178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:09.298 [2024-11-20 09:14:48.086267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms 00:21:09.298 [2024-11-20 09:14:48.086296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.105803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.106101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:09.298 [2024-11-20 09:14:48.106221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.467 ms 00:21:09.298 [2024-11-20 09:14:48.106255] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.112542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.112757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:09.298 [2024-11-20 09:14:48.112884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.229 ms 00:21:09.298 [2024-11-20 09:14:48.112916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.141782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.142078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:09.298 [2024-11-20 09:14:48.142220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.731 ms 00:21:09.298 [2024-11-20 09:14:48.142254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.159400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.159695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:09.298 [2024-11-20 09:14:48.159731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.075 ms 00:21:09.298 [2024-11-20 09:14:48.159741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.159928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.159943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:09.298 [2024-11-20 09:14:48.159965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:21:09.298 [2024-11-20 09:14:48.159973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.186948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.187008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:09.298 [2024-11-20 09:14:48.187024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.955 ms 00:21:09.298 [2024-11-20 09:14:48.187032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.298 [2024-11-20 09:14:48.213638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.298 [2024-11-20 09:14:48.213702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:09.298 [2024-11-20 09:14:48.213733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.545 ms 00:21:09.298 [2024-11-20 09:14:48.213742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.560 [2024-11-20 09:14:48.239218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.560 [2024-11-20 09:14:48.239274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:09.560 [2024-11-20 09:14:48.239289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.410 ms 00:21:09.560 [2024-11-20 09:14:48.239299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.560 [2024-11-20 09:14:48.264703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.560 [2024-11-20 09:14:48.264763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:09.560 [2024-11-20 09:14:48.264778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 25.308 ms 00:21:09.560 [2024-11-20 09:14:48.264786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.560 [2024-11-20 09:14:48.264840] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:09.560 [2024-11-20 09:14:48.264858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.264997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 
09:14:48.265066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 
00:21:09.560 [2024-11-20 09:14:48.265301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:09.560 [2024-11-20 09:14:48.265495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 
wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:09.561 [2024-11-20 09:14:48.265777] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:09.561 [2024-11-20 09:14:48.265794] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ace07a7f-bff5-45b2-a4fb-6c01762c9936 00:21:09.561 [2024-11-20 09:14:48.265803] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:09.561 [2024-11-20 09:14:48.265815] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:09.561 [2024-11-20 09:14:48.265823] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:09.561 [2024-11-20 09:14:48.265833] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:09.561 [2024-11-20 09:14:48.265840] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:09.561 [2024-11-20 09:14:48.265849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:09.561 [2024-11-20 09:14:48.265857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:09.561 [2024-11-20 09:14:48.265885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:09.561 [2024-11-20 09:14:48.265893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:09.561 [2024-11-20 09:14:48.265903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.561 [2024-11-20 09:14:48.265913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:09.561 [2024-11-20 09:14:48.265923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.065 ms 00:21:09.561 [2024-11-20 09:14:48.265932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.280026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.561 [2024-11-20 09:14:48.280079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:09.561 [2024-11-20 09:14:48.280093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.049 ms 00:21:09.561 [2024-11-20 09:14:48.280102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.280519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:09.561 [2024-11-20 09:14:48.280530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:09.561 [2024-11-20 09:14:48.280539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:21:09.561 [2024-11-20 09:14:48.280548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.317262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.561 [2024-11-20 09:14:48.317329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:09.561 [2024-11-20 09:14:48.317343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.561 [2024-11-20 09:14:48.317351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.317441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.561 [2024-11-20 09:14:48.317450] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:09.561 [2024-11-20 09:14:48.317460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.561 [2024-11-20 09:14:48.317469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.317591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.561 [2024-11-20 09:14:48.317602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:09.561 [2024-11-20 09:14:48.317611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.561 [2024-11-20 09:14:48.317620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.317637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.561 [2024-11-20 09:14:48.317645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:09.561 [2024-11-20 09:14:48.317654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.561 [2024-11-20 09:14:48.317661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.561 [2024-11-20 09:14:48.405504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.561 [2024-11-20 09:14:48.405570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:09.561 [2024-11-20 09:14:48.405585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.561 [2024-11-20 09:14:48.405594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.477480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.477543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:09.823 [2024-11-20 09:14:48.477557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.477566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.477633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.477652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:09.823 [2024-11-20 09:14:48.477662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.477670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.477732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.477742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:09.823 [2024-11-20 09:14:48.477751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.477760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.477857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.477886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:09.823 [2024-11-20 09:14:48.477896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.477905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.477952] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.477965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:09.823 [2024-11-20 09:14:48.477978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.477990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.478038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.478049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:09.823 [2024-11-20 09:14:48.478061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.478069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.478119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:09.823 [2024-11-20 09:14:48.478129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:09.823 [2024-11-20 09:14:48.478137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:09.823 [2024-11-20 09:14:48.478146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:09.823 [2024-11-20 09:14:48.478299] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 399.362 ms, result 0 00:21:11.203 00:21:11.203 00:21:11.203 09:14:49 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:11.203 [2024-11-20 09:14:49.779389] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:21:11.203 [2024-11-20 09:14:49.779546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76169 ] 00:21:11.203 [2024-11-20 09:14:49.946193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.203 [2024-11-20 09:14:50.087253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.776 [2024-11-20 09:14:50.391777] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.776 [2024-11-20 09:14:50.391863] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:11.776 [2024-11-20 09:14:50.558582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.558660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.776 [2024-11-20 09:14:50.558681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.776 [2024-11-20 09:14:50.558691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.558759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.558770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.776 [2024-11-20 09:14:50.558782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:11.776 [2024-11-20 09:14:50.558791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.558813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:11.776 [2024-11-20 09:14:50.559573] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.776 [2024-11-20 09:14:50.559594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.559603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.776 [2024-11-20 09:14:50.559613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:21:11.776 [2024-11-20 09:14:50.559622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.561508] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:11.776 [2024-11-20 09:14:50.576458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.576511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:11.776 [2024-11-20 09:14:50.576527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.950 ms 00:21:11.776 [2024-11-20 09:14:50.576537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.576626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.576636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:11.776 [2024-11-20 09:14:50.576647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:11.776 [2024-11-20 09:14:50.576654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.585801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:11.776 [2024-11-20 09:14:50.585846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.776 [2024-11-20 09:14:50.585858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.066 ms 00:21:11.776 [2024-11-20 09:14:50.585882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.585975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.585985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.776 [2024-11-20 09:14:50.585995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:21:11.776 [2024-11-20 09:14:50.586004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.586055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.586065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.776 [2024-11-20 09:14:50.586074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:11.776 [2024-11-20 09:14:50.586082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.586109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.776 [2024-11-20 09:14:50.590339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.590377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.776 [2024-11-20 09:14:50.590390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:21:11.776 [2024-11-20 09:14:50.590403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.590444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.590454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.776 [2024-11-20 09:14:50.590464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:11.776 [2024-11-20 09:14:50.590473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.590532] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:11.776 [2024-11-20 09:14:50.590558] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:11.776 [2024-11-20 09:14:50.590600] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:11.776 [2024-11-20 09:14:50.590622] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:11.776 [2024-11-20 09:14:50.590733] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.776 [2024-11-20 09:14:50.590746] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.776 [2024-11-20 09:14:50.590758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:11.776 [2024-11-20 09:14:50.590770] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.776 [2024-11-20 09:14:50.590781] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.776 [2024-11-20 09:14:50.590791] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.776 [2024-11-20 09:14:50.590801] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.776 [2024-11-20 09:14:50.590811] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.776 [2024-11-20 09:14:50.590820] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.776 [2024-11-20 09:14:50.590832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.590842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.776 [2024-11-20 09:14:50.590851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:21:11.776 [2024-11-20 09:14:50.590859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.590965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.776 [2024-11-20 09:14:50.590977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.776 [2024-11-20 09:14:50.590987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:11.776 [2024-11-20 09:14:50.590996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.776 [2024-11-20 09:14:50.591109] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.776 [2024-11-20 09:14:50.591123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.776 [2024-11-20 09:14:50.591134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.776 [2024-11-20 09:14:50.591143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.776 [2024-11-20 09:14:50.591162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.776 [2024-11-20 09:14:50.591180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.776 [2024-11-20 09:14:50.591189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.776 [2024-11-20 09:14:50.591207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.776 [2024-11-20 09:14:50.591214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.776 [2024-11-20 09:14:50.591222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.776 [2024-11-20 09:14:50.591230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.776 [2024-11-20 09:14:50.591238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.776 [2024-11-20 09:14:50.591256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.776 [2024-11-20 09:14:50.591270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.776 [2024-11-20 09:14:50.591277] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.776 [2024-11-20 09:14:50.591291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.776 [2024-11-20 09:14:50.591305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.776 [2024-11-20 09:14:50.591313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.776 [2024-11-20 09:14:50.591319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.776 [2024-11-20 09:14:50.591325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.776 [2024-11-20 09:14:50.591332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.777 [2024-11-20 09:14:50.591338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.777 [2024-11-20 09:14:50.591345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.777 [2024-11-20 09:14:50.591351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.777 [2024-11-20 09:14:50.591358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.777 [2024-11-20 09:14:50.591365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.777 [2024-11-20 09:14:50.591372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.777 [2024-11-20 09:14:50.591379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.777 [2024-11-20 09:14:50.591386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:11.777 [2024-11-20 09:14:50.591392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.777 [2024-11-20 09:14:50.591399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.777 [2024-11-20 09:14:50.591406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.777 [2024-11-20 09:14:50.591414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:11.777 [2024-11-20 09:14:50.591421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.777 [2024-11-20 09:14:50.591429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.777 [2024-11-20 09:14:50.591435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.777 [2024-11-20 09:14:50.591443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.777 [2024-11-20 09:14:50.591450] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.777 [2024-11-20 09:14:50.591458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.777 [2024-11-20 09:14:50.591467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.777 [2024-11-20 09:14:50.591475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.777 [2024-11-20 09:14:50.591484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.777 [2024-11-20 09:14:50.591492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.777 [2024-11-20 09:14:50.591498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.777 
[2024-11-20 09:14:50.591506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.777 [2024-11-20 09:14:50.591512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.777 [2024-11-20 09:14:50.591519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.777 [2024-11-20 09:14:50.591528] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.777 [2024-11-20 09:14:50.591538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.777 [2024-11-20 09:14:50.591556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.777 [2024-11-20 09:14:50.591564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.777 [2024-11-20 09:14:50.591571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.777 [2024-11-20 09:14:50.591578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.777 [2024-11-20 09:14:50.591586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.777 [2024-11-20 09:14:50.591593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.777 [2024-11-20 09:14:50.591600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.777 [2024-11-20 09:14:50.591607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.777 [2024-11-20 09:14:50.591615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.777 [2024-11-20 09:14:50.591651] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.777 [2024-11-20 09:14:50.591662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.777 [2024-11-20 09:14:50.591681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.777 [2024-11-20 09:14:50.591689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.777 [2024-11-20 09:14:50.591696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.777 [2024-11-20 09:14:50.591704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.591712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.777 [2024-11-20 09:14:50.591720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:21:11.777 [2024-11-20 09:14:50.591727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.625747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.625810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.777 [2024-11-20 09:14:50.625825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.970 ms 00:21:11.777 [2024-11-20 09:14:50.625834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.625973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.625984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.777 [2024-11-20 09:14:50.625993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:11.777 [2024-11-20 09:14:50.626003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.673293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.673356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.777 [2024-11-20 09:14:50.673372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.206 ms 00:21:11.777 [2024-11-20 09:14:50.673382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.673460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.673470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.777 [2024-11-20 09:14:50.673481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.777 [2024-11-20 09:14:50.673493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.674168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.674203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.777 [2024-11-20 09:14:50.674216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:21:11.777 [2024-11-20 09:14:50.674225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.674393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.674405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.777 [2024-11-20 09:14:50.674414] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:21:11.777 [2024-11-20 09:14:50.674429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.777 [2024-11-20 09:14:50.690939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.777 [2024-11-20 09:14:50.690991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.777 [2024-11-20 09:14:50.691008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.487 ms 00:21:11.777 [2024-11-20 09:14:50.691017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.705739] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:12.037 [2024-11-20 09:14:50.705793] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:12.037 [2024-11-20 09:14:50.705808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.705818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:12.037 [2024-11-20 09:14:50.705829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.648 ms 00:21:12.037 [2024-11-20 09:14:50.705837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.732556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.732630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:12.037 [2024-11-20 09:14:50.732645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.633 ms 00:21:12.037 [2024-11-20 09:14:50.732654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.748127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.748185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:12.037 [2024-11-20 09:14:50.748200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.105 ms 00:21:12.037 [2024-11-20 09:14:50.748208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.761620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.761673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:12.037 [2024-11-20 09:14:50.761687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.353 ms 00:21:12.037 [2024-11-20 09:14:50.761695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.762418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.762451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:12.037 [2024-11-20 09:14:50.762462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:21:12.037 [2024-11-20 09:14:50.762474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.832232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.832316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:12.037 [2024-11-20 09:14:50.832341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.732 ms 00:21:12.037 [2024-11-20 09:14:50.832352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.844841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:12.037 [2024-11-20 09:14:50.849094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.849140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:12.037 [2024-11-20 09:14:50.849157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.661 ms 00:21:12.037 [2024-11-20 09:14:50.849168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.037 [2024-11-20 09:14:50.849298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.037 [2024-11-20 09:14:50.849311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:12.038 [2024-11-20 09:14:50.849321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:12.038 [2024-11-20 09:14:50.849332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.038 [2024-11-20 09:14:50.849407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.038 [2024-11-20 09:14:50.849418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:12.038 [2024-11-20 09:14:50.849428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:12.038 [2024-11-20 09:14:50.849436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.038 [2024-11-20 09:14:50.849456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.038 [2024-11-20 09:14:50.849465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:12.038 [2024-11-20 09:14:50.849475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:12.038 [2024-11-20 09:14:50.849483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.038 [2024-11-20 09:14:50.849519] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:12.038 [2024-11-20 09:14:50.849532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.038 [2024-11-20 09:14:50.849540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:12.038 [2024-11-20 09:14:50.849548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:12.038 [2024-11-20 09:14:50.849556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.038 [2024-11-20 09:14:50.877680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.038 [2024-11-20 09:14:50.877751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:12.038 [2024-11-20 09:14:50.877767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.102 ms 00:21:12.038 [2024-11-20 09:14:50.877784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.038 [2024-11-20 09:14:50.877906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:12.038 [2024-11-20 09:14:50.877919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:12.038 [2024-11-20 09:14:50.877930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:12.038 [2024-11-20 09:14:50.877938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
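The startup trace above is a sequence of management steps, each logged as an Action/name/duration/status quadruplet, and the layout numbers it reports are internally consistent: 20971520 L2P entries at an address size of 4 bytes work out to 83886080 bytes, exactly the 80.00 MiB shown for the l2p region, and the 2048 P2L checkpoint pages at one 4 KiB block per page (an assumption; the block size is not stated in the log) give the 8.00 MiB shown for each of the p2l0-p2l3 regions. A rough shell sketch for pulling the per-step timings out of a saved copy of this console output (build.log is a hypothetical filename; assumes one record per line, as Jenkins renders the raw log):

  # pair each "name: <step>" record with the "duration: <ms>" record that follows it,
  # then list the slowest management steps first
  paste -d' ' \
    <(grep -oE 'duration: [0-9.]+ ms' build.log | awk '{print $2}') \
    <(grep -oE 'name: .+$' build.log | sed 's/^name: //') \
  | sort -rn | head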
00:21:12.038 [2024-11-20 09:14:50.879442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.320 ms, result 0 00:21:13.423  [2024-11-20T09:14:53.286Z] Copying: 10/1024 [MB] (10 MBps) [2024-11-20T09:14:54.293Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-20T09:14:55.239Z] Copying: 31668/1048576 [kB] (10100 kBps) [2024-11-20T09:14:56.181Z] Copying: 41896/1048576 [kB] (10228 kBps) [2024-11-20T09:14:57.124Z] Copying: 51560/1048576 [kB] (9664 kBps) [2024-11-20T09:14:58.509Z] Copying: 61/1024 [MB] (11 MBps) [2024-11-20T09:14:59.081Z] Copying: 71/1024 [MB] (10 MBps) [2024-11-20T09:15:00.470Z] Copying: 81/1024 [MB] (10 MBps) [2024-11-20T09:15:01.407Z] Copying: 94/1024 [MB] (12 MBps) [2024-11-20T09:15:02.371Z] Copying: 120/1024 [MB] (25 MBps) [2024-11-20T09:15:03.312Z] Copying: 167/1024 [MB] (47 MBps) [2024-11-20T09:15:04.246Z] Copying: 212/1024 [MB] (45 MBps) [2024-11-20T09:15:05.178Z] Copying: 260/1024 [MB] (47 MBps) [2024-11-20T09:15:06.112Z] Copying: 308/1024 [MB] (48 MBps) [2024-11-20T09:15:07.488Z] Copying: 352/1024 [MB] (44 MBps) [2024-11-20T09:15:08.085Z] Copying: 400/1024 [MB] (47 MBps) [2024-11-20T09:15:09.458Z] Copying: 449/1024 [MB] (49 MBps) [2024-11-20T09:15:10.390Z] Copying: 495/1024 [MB] (46 MBps) [2024-11-20T09:15:11.323Z] Copying: 544/1024 [MB] (48 MBps) [2024-11-20T09:15:12.257Z] Copying: 591/1024 [MB] (46 MBps) [2024-11-20T09:15:13.191Z] Copying: 642/1024 [MB] (50 MBps) [2024-11-20T09:15:14.137Z] Copying: 689/1024 [MB] (47 MBps) [2024-11-20T09:15:15.072Z] Copying: 737/1024 [MB] (47 MBps) [2024-11-20T09:15:16.445Z] Copying: 783/1024 [MB] (46 MBps) [2024-11-20T09:15:17.380Z] Copying: 831/1024 [MB] (47 MBps) [2024-11-20T09:15:18.314Z] Copying: 877/1024 [MB] (46 MBps) [2024-11-20T09:15:19.249Z] Copying: 925/1024 [MB] (47 MBps) [2024-11-20T09:15:20.183Z] Copying: 971/1024 [MB] (46 MBps) [2024-11-20T09:15:20.441Z] Copying: 1018/1024 [MB] (46 MBps) [2024-11-20T09:15:20.699Z] Copying: 1024/1024 [MB] (average 35 MBps)[2024-11-20 09:15:20.586582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.586655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.780 [2024-11-20 09:15:20.586671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:41.780 [2024-11-20 09:15:20.586682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.586708] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.780 [2024-11-20 09:15:20.590015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.590065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.780 [2024-11-20 09:15:20.590085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:21:41.780 [2024-11-20 09:15:20.590095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.590377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.590400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.780 [2024-11-20 09:15:20.590412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:21:41.780 [2024-11-20 09:15:20.590421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.594072] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.594097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.780 [2024-11-20 09:15:20.594104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.634 ms 00:21:41.780 [2024-11-20 09:15:20.594111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.599854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.599905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.780 [2024-11-20 09:15:20.599915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:21:41.780 [2024-11-20 09:15:20.599921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.620097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.620149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:41.780 [2024-11-20 09:15:20.620160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.109 ms 00:21:41.780 [2024-11-20 09:15:20.620167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.632555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.632606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:41.780 [2024-11-20 09:15:20.632618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.348 ms 00:21:41.780 [2024-11-20 09:15:20.632625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.632741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.632757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:41.780 [2024-11-20 09:15:20.632764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:41.780 [2024-11-20 09:15:20.632769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.652779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.652824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:41.780 [2024-11-20 09:15:20.652835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.997 ms 00:21:41.780 [2024-11-20 09:15:20.652842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.671842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.671909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:41.780 [2024-11-20 09:15:20.671920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.970 ms 00:21:41.780 [2024-11-20 09:15:20.671926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.780 [2024-11-20 09:15:20.690241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.780 [2024-11-20 09:15:20.690288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:41.780 [2024-11-20 09:15:20.690298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.285 ms 00:21:41.780 [2024-11-20 09:15:20.690304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:42.040 [2024-11-20 09:15:20.708721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.040 [2024-11-20 09:15:20.708771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.040 [2024-11-20 09:15:20.708782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.337 ms 00:21:42.040 [2024-11-20 09:15:20.708788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.040 [2024-11-20 09:15:20.708817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:42.040 [2024-11-20 09:15:20.708829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:42.040 [2024-11-20 09:15:20.708907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 
261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.708995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709265] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 
09:15:20.709415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:42.041 [2024-11-20 09:15:20.709440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:42.042 [2024-11-20 09:15:20.709453] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.042 [2024-11-20 09:15:20.709465] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ace07a7f-bff5-45b2-a4fb-6c01762c9936 00:21:42.042 [2024-11-20 09:15:20.709472] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:42.042 [2024-11-20 09:15:20.709478] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:42.042 [2024-11-20 09:15:20.709484] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:42.042 [2024-11-20 09:15:20.709490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:42.042 [2024-11-20 09:15:20.709496] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.042 [2024-11-20 09:15:20.709502] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.042 [2024-11-20 09:15:20.709514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.042 [2024-11-20 09:15:20.709519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.042 [2024-11-20 09:15:20.709524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:42.042 [2024-11-20 09:15:20.709530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.042 [2024-11-20 09:15:20.709536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.042 [2024-11-20 09:15:20.709543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:21:42.042 [2024-11-20 09:15:20.709549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.719789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.042 [2024-11-20 09:15:20.719833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.042 [2024-11-20 09:15:20.719843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.221 ms 00:21:42.042 [2024-11-20 09:15:20.719850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.720144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.042 [2024-11-20 09:15:20.720156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.042 [2024-11-20 09:15:20.720163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:21:42.042 [2024-11-20 09:15:20.720176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.747132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.747183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.042 
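In the statistics dump just above, "WAF: inf" follows directly from the two counters next to it: write amplification here is the ratio of total device writes to user writes, and 960 / 0 has no finite value, so a freshly cleaned device that has written only metadata reports an infinite WAF.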
[2024-11-20 09:15:20.747193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.747200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.747258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.747265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.042 [2024-11-20 09:15:20.747271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.747281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.747342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.747350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.042 [2024-11-20 09:15:20.747356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.747362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.747374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.747381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.042 [2024-11-20 09:15:20.747387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.747394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.811520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.811569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.042 [2024-11-20 09:15:20.811579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.811585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.042 [2024-11-20 09:15:20.863230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.042 [2024-11-20 09:15:20.863323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.042 [2024-11-20 09:15:20.863370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863459] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.042 [2024-11-20 09:15:20.863465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.042 [2024-11-20 09:15:20.863510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.042 [2024-11-20 09:15:20.863561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.042 [2024-11-20 09:15:20.863612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.042 [2024-11-20 09:15:20.863618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.042 [2024-11-20 09:15:20.863625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.042 [2024-11-20 09:15:20.863723] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.123 ms, result 0 00:21:42.609 00:21:42.609 00:21:42.609 09:15:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:45.138 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:45.138 09:15:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:45.138 [2024-11-20 09:15:23.640462] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
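The two restore-test steps recorded above come from test/ftl/restore.sh: line 76 checks the test file against its stored checksum (reported "OK"), and line 79 uses spdk_dd to write the file back into the FTL bdev at an offset, after which the device is brought up again below. As a sketch, the equivalent standalone invocation with the exact paths and flags captured in this run (spdk_dd follows dd-style conventions, with --seek as the output offset into the bdev):

  # verify the test file against the reference checksum
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # write the test file into the ftl0 bdev described by ftl.json, starting at offset 131072
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
      --ob=ftl0 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
      --seek=131072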
00:21:45.138 [2024-11-20 09:15:23.640592] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76517 ] 00:21:45.138 [2024-11-20 09:15:23.802419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.138 [2024-11-20 09:15:23.903823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.396 [2024-11-20 09:15:24.159243] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.396 [2024-11-20 09:15:24.159302] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:45.396 [2024-11-20 09:15:24.312161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.396 [2024-11-20 09:15:24.312235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:45.396 [2024-11-20 09:15:24.312254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:45.396 [2024-11-20 09:15:24.312262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.396 [2024-11-20 09:15:24.312315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.396 [2024-11-20 09:15:24.312325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.396 [2024-11-20 09:15:24.312335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:45.396 [2024-11-20 09:15:24.312343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.396 [2024-11-20 09:15:24.312362] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:45.396 [2024-11-20 09:15:24.313107] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:45.396 [2024-11-20 09:15:24.313130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.396 [2024-11-20 09:15:24.313139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.396 [2024-11-20 09:15:24.313147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:21:45.396 [2024-11-20 09:15:24.313155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.314307] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:45.655 [2024-11-20 09:15:24.326907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.326961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:45.655 [2024-11-20 09:15:24.326975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.598 ms 00:21:45.655 [2024-11-20 09:15:24.326984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.327071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.327081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:45.655 [2024-11-20 09:15:24.327089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:45.655 [2024-11-20 09:15:24.327097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.332723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:45.655 [2024-11-20 09:15:24.332763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.655 [2024-11-20 09:15:24.332774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.540 ms 00:21:45.655 [2024-11-20 09:15:24.332782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.332868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.332908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.655 [2024-11-20 09:15:24.332916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:45.655 [2024-11-20 09:15:24.332924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.332982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.332991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:45.655 [2024-11-20 09:15:24.333000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:45.655 [2024-11-20 09:15:24.333007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.333030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:45.655 [2024-11-20 09:15:24.336545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.336577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.655 [2024-11-20 09:15:24.336587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.520 ms 00:21:45.655 [2024-11-20 09:15:24.336597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.336632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.336640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:45.655 [2024-11-20 09:15:24.336648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:45.655 [2024-11-20 09:15:24.336656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.336679] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:45.655 [2024-11-20 09:15:24.336697] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:45.655 [2024-11-20 09:15:24.336731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:45.655 [2024-11-20 09:15:24.336748] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:45.655 [2024-11-20 09:15:24.336850] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:45.655 [2024-11-20 09:15:24.336860] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:45.655 [2024-11-20 09:15:24.336882] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:45.655 [2024-11-20 09:15:24.336892] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:45.655 [2024-11-20 09:15:24.336900] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:45.655 [2024-11-20 09:15:24.336908] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:45.655 [2024-11-20 09:15:24.336916] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:45.655 [2024-11-20 09:15:24.336923] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:45.655 [2024-11-20 09:15:24.336929] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:45.655 [2024-11-20 09:15:24.336939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.336946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:45.655 [2024-11-20 09:15:24.336955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:21:45.655 [2024-11-20 09:15:24.336962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.337046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.655 [2024-11-20 09:15:24.337053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:45.655 [2024-11-20 09:15:24.337061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:45.655 [2024-11-20 09:15:24.337068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.655 [2024-11-20 09:15:24.337200] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:45.655 [2024-11-20 09:15:24.337213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:45.655 [2024-11-20 09:15:24.337222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.655 [2024-11-20 09:15:24.337229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.655 [2024-11-20 09:15:24.337237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:45.655 [2024-11-20 09:15:24.337244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:45.655 [2024-11-20 09:15:24.337250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:45.655 [2024-11-20 09:15:24.337258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:45.655 [2024-11-20 09:15:24.337265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:45.655 [2024-11-20 09:15:24.337272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.655 [2024-11-20 09:15:24.337279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:45.655 [2024-11-20 09:15:24.337285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:45.655 [2024-11-20 09:15:24.337291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:45.655 [2024-11-20 09:15:24.337298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:45.655 [2024-11-20 09:15:24.337306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:45.655 [2024-11-20 09:15:24.337317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.655 [2024-11-20 09:15:24.337323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:45.655 [2024-11-20 09:15:24.337330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:45.655 [2024-11-20 09:15:24.337338] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.655 [2024-11-20 09:15:24.337344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:45.655 [2024-11-20 09:15:24.337351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:45.655 [2024-11-20 09:15:24.337358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.655 [2024-11-20 09:15:24.337364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:45.655 [2024-11-20 09:15:24.337370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.656 [2024-11-20 09:15:24.337384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:45.656 [2024-11-20 09:15:24.337390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.656 [2024-11-20 09:15:24.337402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:45.656 [2024-11-20 09:15:24.337409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:45.656 [2024-11-20 09:15:24.337422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:45.656 [2024-11-20 09:15:24.337428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.656 [2024-11-20 09:15:24.337440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:45.656 [2024-11-20 09:15:24.337447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:45.656 [2024-11-20 09:15:24.337453] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:45.656 [2024-11-20 09:15:24.337460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:45.656 [2024-11-20 09:15:24.337466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:45.656 [2024-11-20 09:15:24.337472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:45.656 [2024-11-20 09:15:24.337485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:45.656 [2024-11-20 09:15:24.337492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337499] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:45.656 [2024-11-20 09:15:24.337506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:45.656 [2024-11-20 09:15:24.337513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:45.656 [2024-11-20 09:15:24.337520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:45.656 [2024-11-20 09:15:24.337527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:45.656 [2024-11-20 09:15:24.337533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:45.656 [2024-11-20 09:15:24.337540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:45.656 
[2024-11-20 09:15:24.337548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:45.656 [2024-11-20 09:15:24.337555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:45.656 [2024-11-20 09:15:24.337561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:45.656 [2024-11-20 09:15:24.337569] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:45.656 [2024-11-20 09:15:24.337577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:45.656 [2024-11-20 09:15:24.337593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:45.656 [2024-11-20 09:15:24.337600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:45.656 [2024-11-20 09:15:24.337606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:45.656 [2024-11-20 09:15:24.337613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:45.656 [2024-11-20 09:15:24.337620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:45.656 [2024-11-20 09:15:24.337627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:45.656 [2024-11-20 09:15:24.337634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:45.656 [2024-11-20 09:15:24.337640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:45.656 [2024-11-20 09:15:24.337647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:45.656 [2024-11-20 09:15:24.337682] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:45.656 [2024-11-20 09:15:24.337692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:45.656 [2024-11-20 09:15:24.337707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:45.656 [2024-11-20 09:15:24.337714] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:45.656 [2024-11-20 09:15:24.337721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:45.656 [2024-11-20 09:15:24.337728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.337736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:45.656 [2024-11-20 09:15:24.337744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:21:45.656 [2024-11-20 09:15:24.337750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.364052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.364094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.656 [2024-11-20 09:15:24.364105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.259 ms 00:21:45.656 [2024-11-20 09:15:24.364113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.364211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.364219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:45.656 [2024-11-20 09:15:24.364227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:45.656 [2024-11-20 09:15:24.364235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.405289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.405338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.656 [2024-11-20 09:15:24.405352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.990 ms 00:21:45.656 [2024-11-20 09:15:24.405359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.405415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.405424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.656 [2024-11-20 09:15:24.405433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:45.656 [2024-11-20 09:15:24.405445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.405832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.405857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.656 [2024-11-20 09:15:24.405866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:21:45.656 [2024-11-20 09:15:24.405899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.406026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.406035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.656 [2024-11-20 09:15:24.406043] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:21:45.656 [2024-11-20 09:15:24.406056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.419088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.419130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.656 [2024-11-20 09:15:24.419145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.012 ms 00:21:45.656 [2024-11-20 09:15:24.419152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.431723] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:45.656 [2024-11-20 09:15:24.431773] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:45.656 [2024-11-20 09:15:24.431785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.431793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:45.656 [2024-11-20 09:15:24.431803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.519 ms 00:21:45.656 [2024-11-20 09:15:24.431811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.456657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.456719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:45.656 [2024-11-20 09:15:24.456731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.787 ms 00:21:45.656 [2024-11-20 09:15:24.456739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.469515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.469565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:45.656 [2024-11-20 09:15:24.469576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.705 ms 00:21:45.656 [2024-11-20 09:15:24.469584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.481953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.482010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:45.656 [2024-11-20 09:15:24.482022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.314 ms 00:21:45.656 [2024-11-20 09:15:24.482029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.482692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.482715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:45.656 [2024-11-20 09:15:24.482725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:21:45.656 [2024-11-20 09:15:24.482735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.540438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.540496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:45.656 [2024-11-20 09:15:24.540520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.683 ms 00:21:45.656 [2024-11-20 09:15:24.540529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.551711] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:45.656 [2024-11-20 09:15:24.554562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.554597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:45.656 [2024-11-20 09:15:24.554609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.958 ms 00:21:45.656 [2024-11-20 09:15:24.554619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.554731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.554742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:45.656 [2024-11-20 09:15:24.554752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:45.656 [2024-11-20 09:15:24.554763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.554826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.554842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:45.656 [2024-11-20 09:15:24.554850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:45.656 [2024-11-20 09:15:24.554858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.554888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.554897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:45.656 [2024-11-20 09:15:24.554905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:45.656 [2024-11-20 09:15:24.554912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.656 [2024-11-20 09:15:24.554940] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:45.656 [2024-11-20 09:15:24.554951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.656 [2024-11-20 09:15:24.554959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:45.656 [2024-11-20 09:15:24.554966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:45.656 [2024-11-20 09:15:24.554973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.914 [2024-11-20 09:15:24.579810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.914 [2024-11-20 09:15:24.579866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:45.914 [2024-11-20 09:15:24.579890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.818 ms 00:21:45.914 [2024-11-20 09:15:24.579904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.914 [2024-11-20 09:15:24.579993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.914 [2024-11-20 09:15:24.580003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:45.914 [2024-11-20 09:15:24.580011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:45.914 [2024-11-20 09:15:24.580018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
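[Editorial note] The FTL layout dump above is internally consistent and can be sanity-checked by hand: the l2p region is reported as 80.00 MiB, which is exactly the logged 20971520 L2P entries at the logged address size of 4 bytes, and the same region shows up in the superblock metadata dump as 0x5000 blocks at blk_offs:0x20. A minimal cross-check sketch, assuming the 4 KiB FTL block size these figures imply (an assumption; the log never states the block size directly):

# Values copied verbatim from the ftl_layout.c / ftl_sb_v5.c dumps above.
l2p_entries    = 20971520      # "L2P entries"
l2p_addr_size  = 4             # "L2P address size" (bytes per entry)
l2p_region_mib = 80.00         # "Region l2p ... blocks: 80.00 MiB"
sb_l2p_blocks  = 0x5000        # "Region type:0x2 ... blk_sz:0x5000"
blk_size       = 4096          # assumed FTL block size in bytes

assert l2p_entries * l2p_addr_size == l2p_region_mib * 1024 * 1024
assert sb_l2p_blocks * blk_size == l2p_region_mib * 1024 * 1024
# The 0x20-block offset of that superblock region is 0.125 MiB, matching
# the truncated "Region l2p ... offset: 0.12 MiB" line above.
print("l2p region size checks out:", l2p_region_mib, "MiB")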
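[Editorial note] Every management step in this trace is a fixed group of trace_step records (Action, name, duration, status) emitted by mngt/ftl_mngt.c. To pull per-step timings out of a console log like this one, a minimal sketch (a hypothetical helper, not part of SPDK; it assumes the console is split one record per line, as the original Jenkins output is):

import re

# Record layout taken from the trace_step lines above.
NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+)")
DUR  = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

def step_timings(lines):
    """Yield (device, step name, duration in ms) for each traced step."""
    pending = None
    for line in lines:
        m = NAME.search(line)
        if m:
            pending = (m.group(1), m.group(2).strip())
            continue
        m = DUR.search(line)
        if m and pending:
            yield pending[0], pending[1], float(m.group(1))
            pending = None

Run against the startup sequence above, this would report "Restore P2L checkpoints" (57.683 ms) as the slowest step, consistent with the 'FTL startup' total reported just below.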
00:21:45.914 [2024-11-20 09:15:24.581091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.492 ms, result 0 00:21:46.845  [2024-11-20T09:15:26.694Z] Copying: 47/1024 [MB] (47 MBps) [2024-11-20T09:15:27.626Z] Copying: 93/1024 [MB] (46 MBps) [2024-11-20T09:15:28.995Z] Copying: 141/1024 [MB] (47 MBps) [2024-11-20T09:15:29.926Z] Copying: 188/1024 [MB] (47 MBps) [2024-11-20T09:15:30.859Z] Copying: 238/1024 [MB] (49 MBps) [2024-11-20T09:15:31.792Z] Copying: 283/1024 [MB] (45 MBps) [2024-11-20T09:15:32.766Z] Copying: 333/1024 [MB] (49 MBps) [2024-11-20T09:15:33.724Z] Copying: 382/1024 [MB] (48 MBps) [2024-11-20T09:15:34.655Z] Copying: 424/1024 [MB] (41 MBps) [2024-11-20T09:15:36.029Z] Copying: 464/1024 [MB] (40 MBps) [2024-11-20T09:15:36.960Z] Copying: 504/1024 [MB] (39 MBps) [2024-11-20T09:15:37.893Z] Copying: 549/1024 [MB] (45 MBps) [2024-11-20T09:15:38.860Z] Copying: 596/1024 [MB] (46 MBps) [2024-11-20T09:15:39.791Z] Copying: 638/1024 [MB] (41 MBps) [2024-11-20T09:15:40.724Z] Copying: 682/1024 [MB] (44 MBps) [2024-11-20T09:15:41.656Z] Copying: 729/1024 [MB] (46 MBps) [2024-11-20T09:15:43.032Z] Copying: 781/1024 [MB] (52 MBps) [2024-11-20T09:15:43.601Z] Copying: 824/1024 [MB] (42 MBps) [2024-11-20T09:15:44.973Z] Copying: 864/1024 [MB] (39 MBps) [2024-11-20T09:15:45.908Z] Copying: 908/1024 [MB] (44 MBps) [2024-11-20T09:15:46.841Z] Copying: 944/1024 [MB] (35 MBps) [2024-11-20T09:15:47.775Z] Copying: 989/1024 [MB] (45 MBps) [2024-11-20T09:15:48.740Z] Copying: 1023/1024 [MB] (33 MBps) [2024-11-20T09:15:48.740Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-11-20 09:15:48.406518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.406724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:09.821 [2024-11-20 09:15:48.406745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:09.821 [2024-11-20 09:15:48.406765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.409443] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:09.821 [2024-11-20 09:15:48.413567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.413601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:09.821 [2024-11-20 09:15:48.413612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:22:09.821 [2024-11-20 09:15:48.413622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.425783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.425852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:09.821 [2024-11-20 09:15:48.425866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.372 ms 00:22:09.821 [2024-11-20 09:15:48.425887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.444103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.444154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:09.821 [2024-11-20 09:15:48.444166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.191 ms 00:22:09.821 [2024-11-20 09:15:48.444173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.450342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.450374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:09.821 [2024-11-20 09:15:48.450385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:22:09.821 [2024-11-20 09:15:48.450394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.473994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.474037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:09.821 [2024-11-20 09:15:48.474049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.533 ms 00:22:09.821 [2024-11-20 09:15:48.474057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.487911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.487954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:09.821 [2024-11-20 09:15:48.487967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.813 ms 00:22:09.821 [2024-11-20 09:15:48.487976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.542337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.542419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:09.821 [2024-11-20 09:15:48.542433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.314 ms 00:22:09.821 [2024-11-20 09:15:48.542441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.566179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.566224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:09.821 [2024-11-20 09:15:48.566237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.721 ms 00:22:09.821 [2024-11-20 09:15:48.566246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.588988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.589036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:09.821 [2024-11-20 09:15:48.589048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.704 ms 00:22:09.821 [2024-11-20 09:15:48.589055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.611426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.611461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:09.821 [2024-11-20 09:15:48.611473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.335 ms 00:22:09.821 [2024-11-20 09:15:48.611481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.634013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.821 [2024-11-20 09:15:48.634050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:09.821 [2024-11-20 09:15:48.634061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.475 ms 00:22:09.821 
[2024-11-20 09:15:48.634069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.821 [2024-11-20 09:15:48.634100] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:09.821 [2024-11-20 09:15:48.634115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122368 / 261120 wr_cnt: 1 state: open 00:22:09.821 [2024-11-20 09:15:48.634124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:09.821 [2024-11-20 09:15:48.634273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634287] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 
09:15:48.634469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:22:09.822 [2024-11-20 09:15:48.634647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:09.822 [2024-11-20 09:15:48.634853] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:09.822 [2024-11-20 09:15:48.634860] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ace07a7f-bff5-45b2-a4fb-6c01762c9936 00:22:09.822 [2024-11-20 09:15:48.634868] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122368 00:22:09.822 [2024-11-20 09:15:48.634886] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123328 00:22:09.822 [2024-11-20 09:15:48.634893] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122368 00:22:09.822 [2024-11-20 09:15:48.634901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0078 00:22:09.822 [2024-11-20 09:15:48.634908] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:09.822 [2024-11-20 09:15:48.634919] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:09.822 [2024-11-20 09:15:48.634932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:09.822 [2024-11-20 09:15:48.634938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:09.822 [2024-11-20 09:15:48.634944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:09.822 [2024-11-20 09:15:48.634951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.822 [2024-11-20 09:15:48.634959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:09.823 [2024-11-20 09:15:48.634967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:22:09.823 [2024-11-20 09:15:48.634974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.823 [2024-11-20 09:15:48.647282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.823 [2024-11-20 09:15:48.647313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:09.823 [2024-11-20 09:15:48.647328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:22:09.823 [2024-11-20 09:15:48.647340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.823 [2024-11-20 09:15:48.647677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:09.823 [2024-11-20 09:15:48.647692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:09.823 [2024-11-20 09:15:48.647700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:22:09.823 [2024-11-20 09:15:48.647707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.823 [2024-11-20 09:15:48.679945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.823 [2024-11-20 09:15:48.679987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:09.823 [2024-11-20 09:15:48.680002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.823 [2024-11-20 09:15:48.680011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.823 [2024-11-20 09:15:48.680075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.823 [2024-11-20 09:15:48.680084] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:09.823 [2024-11-20 09:15:48.680091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.823 [2024-11-20 09:15:48.680098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.823 [2024-11-20 09:15:48.680191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.823 [2024-11-20 09:15:48.680202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:09.823 [2024-11-20 09:15:48.680210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.823 [2024-11-20 09:15:48.680219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:09.823 [2024-11-20 09:15:48.680234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:09.823 [2024-11-20 09:15:48.680241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:09.823 [2024-11-20 09:15:48.680249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:09.823 [2024-11-20 09:15:48.680256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.755725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.755772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:10.081 [2024-11-20 09:15:48.755788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.755796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.817593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.817635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:10.081 [2024-11-20 09:15:48.817646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.817653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.817707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.817716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:10.081 [2024-11-20 09:15:48.817724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.817731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.817782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.817791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:10.081 [2024-11-20 09:15:48.817799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.817806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.817904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.817914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:10.081 [2024-11-20 09:15:48.817922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.817929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.817961] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.817970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:10.081 [2024-11-20 09:15:48.817977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.817984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.818017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.818026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:10.081 [2024-11-20 09:15:48.818033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.818040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.818084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:10.081 [2024-11-20 09:15:48.818093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:10.081 [2024-11-20 09:15:48.818101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:10.081 [2024-11-20 09:15:48.818108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:10.081 [2024-11-20 09:15:48.818219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 413.923 ms, result 0 00:22:13.365 00:22:13.365 00:22:13.365 09:15:52 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:13.365 [2024-11-20 09:15:52.212882] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
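[Editorial note] Two figures in the shutdown dump above fall straight out of the logged counters: the write amplification factor is total writes over user writes, and the copy-phase throughput matches the per-chunk progress lines. A quick check using only numbers copied from the log (the ~23.8 s window is read off the surrounding timestamps, so treat it as approximate):

# Stats copied verbatim from the ftl_debug.c dump above.
total_writes = 123328          # "total writes"
user_writes  = 122368          # "user writes"
print(f"WAF = {total_writes / user_writes:.4f}")   # 1.0078, as logged

# 1024 MB copied between ~09:15:24.6 and ~09:15:48.4 (about 23.8 s).
print(f"throughput ~ {1024 / 23.8:.0f} MBps")      # ~43 MBps, as logged

The spdk_dd invocation that follows reads back a matching extent: assuming the ftl0 bdev exposes 4 KiB blocks, --count=262144 works out to 1024 MiB, with --skip=131072 starting 512 MiB into the input.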
00:22:13.365 [2024-11-20 09:15:52.213011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76806 ] 00:22:13.630 [2024-11-20 09:15:52.371515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.630 [2024-11-20 09:15:52.473882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.894 [2024-11-20 09:15:52.730794] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:13.894 [2024-11-20 09:15:52.730857] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:14.155 [2024-11-20 09:15:52.887681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.155 [2024-11-20 09:15:52.887737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:14.155 [2024-11-20 09:15:52.887755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:14.155 [2024-11-20 09:15:52.887763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.155 [2024-11-20 09:15:52.887821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.155 [2024-11-20 09:15:52.887831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:14.155 [2024-11-20 09:15:52.887842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:14.155 [2024-11-20 09:15:52.887849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.155 [2024-11-20 09:15:52.887882] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:14.155 [2024-11-20 09:15:52.888675] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:14.155 [2024-11-20 09:15:52.888699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.155 [2024-11-20 09:15:52.888707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:14.155 [2024-11-20 09:15:52.888716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.835 ms 00:22:14.155 [2024-11-20 09:15:52.888723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.155 [2024-11-20 09:15:52.889904] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:14.155 [2024-11-20 09:15:52.902949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.155 [2024-11-20 09:15:52.902997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:14.155 [2024-11-20 09:15:52.903018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.046 ms 00:22:14.155 [2024-11-20 09:15:52.903026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.155 [2024-11-20 09:15:52.903111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.155 [2024-11-20 09:15:52.903122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:14.155 [2024-11-20 09:15:52.903130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:22:14.155 [2024-11-20 09:15:52.903137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.155 [2024-11-20 09:15:52.908740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:14.155 [2024-11-20 09:15:52.908783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:14.155 [2024-11-20 09:15:52.908796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.515 ms 00:22:14.155 [2024-11-20 09:15:52.908805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.908917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.156 [2024-11-20 09:15:52.908926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:14.156 [2024-11-20 09:15:52.908935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:22:14.156 [2024-11-20 09:15:52.908942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.908991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.156 [2024-11-20 09:15:52.909000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:14.156 [2024-11-20 09:15:52.909008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:14.156 [2024-11-20 09:15:52.909015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.909038] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:14.156 [2024-11-20 09:15:52.912498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.156 [2024-11-20 09:15:52.912531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:14.156 [2024-11-20 09:15:52.912542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.465 ms 00:22:14.156 [2024-11-20 09:15:52.912553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.912589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.156 [2024-11-20 09:15:52.912598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:14.156 [2024-11-20 09:15:52.912607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:14.156 [2024-11-20 09:15:52.912616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.912640] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:14.156 [2024-11-20 09:15:52.912660] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:14.156 [2024-11-20 09:15:52.912697] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:14.156 [2024-11-20 09:15:52.912715] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:14.156 [2024-11-20 09:15:52.912820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:14.156 [2024-11-20 09:15:52.912831] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:14.156 [2024-11-20 09:15:52.912842] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:14.156 [2024-11-20 09:15:52.912853] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:14.156 [2024-11-20 09:15:52.912862] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:14.156 [2024-11-20 09:15:52.912881] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:14.156 [2024-11-20 09:15:52.912890] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:14.156 [2024-11-20 09:15:52.912898] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:14.156 [2024-11-20 09:15:52.912906] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:14.156 [2024-11-20 09:15:52.912917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.156 [2024-11-20 09:15:52.912925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:14.156 [2024-11-20 09:15:52.912934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:22:14.156 [2024-11-20 09:15:52.912941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.913026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.156 [2024-11-20 09:15:52.913034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:14.156 [2024-11-20 09:15:52.913043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:14.156 [2024-11-20 09:15:52.913050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.156 [2024-11-20 09:15:52.913192] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:14.156 [2024-11-20 09:15:52.913212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:14.156 [2024-11-20 09:15:52.913222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:14.156 [2024-11-20 09:15:52.913250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:14.156 [2024-11-20 09:15:52.913276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:14.156 [2024-11-20 09:15:52.913292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:14.156 [2024-11-20 09:15:52.913299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:14.156 [2024-11-20 09:15:52.913307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:14.156 [2024-11-20 09:15:52.913315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:14.156 [2024-11-20 09:15:52.913323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:14.156 [2024-11-20 09:15:52.913336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:14.156 [2024-11-20 09:15:52.913352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913360] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:14.156 [2024-11-20 09:15:52.913376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:14.156 [2024-11-20 09:15:52.913398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:14.156 [2024-11-20 09:15:52.913421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:14.156 [2024-11-20 09:15:52.913444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:14.156 [2024-11-20 09:15:52.913467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:14.156 [2024-11-20 09:15:52.913482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:14.156 [2024-11-20 09:15:52.913489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:14.156 [2024-11-20 09:15:52.913495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:14.156 [2024-11-20 09:15:52.913503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:14.156 [2024-11-20 09:15:52.913510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:14.156 [2024-11-20 09:15:52.913516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913523] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:14.156 [2024-11-20 09:15:52.913529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:14.156 [2024-11-20 09:15:52.913535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913541] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:14.156 [2024-11-20 09:15:52.913550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:14.156 [2024-11-20 09:15:52.913557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:14.156 [2024-11-20 09:15:52.913571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:14.156 [2024-11-20 09:15:52.913578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:14.156 [2024-11-20 09:15:52.913585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:14.156 
[2024-11-20 09:15:52.913592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:14.156 [2024-11-20 09:15:52.913598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:14.156 [2024-11-20 09:15:52.913605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:14.156 [2024-11-20 09:15:52.913613] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:14.156 [2024-11-20 09:15:52.913622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:14.156 [2024-11-20 09:15:52.913631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:14.156 [2024-11-20 09:15:52.913638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:14.156 [2024-11-20 09:15:52.913646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:14.156 [2024-11-20 09:15:52.913652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:14.156 [2024-11-20 09:15:52.913659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:14.156 [2024-11-20 09:15:52.913666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:14.157 [2024-11-20 09:15:52.913673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:14.157 [2024-11-20 09:15:52.913680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:14.157 [2024-11-20 09:15:52.913687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:14.157 [2024-11-20 09:15:52.913694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:14.157 [2024-11-20 09:15:52.913701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:14.157 [2024-11-20 09:15:52.913708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:14.157 [2024-11-20 09:15:52.913715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:14.157 [2024-11-20 09:15:52.913723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:14.157 [2024-11-20 09:15:52.913730] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:14.157 [2024-11-20 09:15:52.913741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:14.157 [2024-11-20 09:15:52.913749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:14.157 [2024-11-20 09:15:52.913757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:14.157 [2024-11-20 09:15:52.913764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:14.157 [2024-11-20 09:15:52.913771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:14.157 [2024-11-20 09:15:52.913778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.913786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:14.157 [2024-11-20 09:15:52.913794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:22:14.157 [2024-11-20 09:15:52.913801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.940131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.940173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:14.157 [2024-11-20 09:15:52.940187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.285 ms 00:22:14.157 [2024-11-20 09:15:52.940194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.940299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.940307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:14.157 [2024-11-20 09:15:52.940316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:14.157 [2024-11-20 09:15:52.940323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.984422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.984471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:14.157 [2024-11-20 09:15:52.984486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.031 ms 00:22:14.157 [2024-11-20 09:15:52.984493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.984555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.984565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:14.157 [2024-11-20 09:15:52.984574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:14.157 [2024-11-20 09:15:52.984585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.985006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.985022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:14.157 [2024-11-20 09:15:52.985032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:22:14.157 [2024-11-20 09:15:52.985039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.985172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.985180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:14.157 [2024-11-20 09:15:52.985188] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:14.157 [2024-11-20 09:15:52.985200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:52.998375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:52.998413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:14.157 [2024-11-20 09:15:52.998429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.156 ms 00:22:14.157 [2024-11-20 09:15:52.998437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:53.011379] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:14.157 [2024-11-20 09:15:53.011429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:14.157 [2024-11-20 09:15:53.011442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:53.011451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:14.157 [2024-11-20 09:15:53.011462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.882 ms 00:22:14.157 [2024-11-20 09:15:53.011469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:53.036969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:53.037046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:14.157 [2024-11-20 09:15:53.037059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.440 ms 00:22:14.157 [2024-11-20 09:15:53.037068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:53.049522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:53.049577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:14.157 [2024-11-20 09:15:53.049588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.388 ms 00:22:14.157 [2024-11-20 09:15:53.049596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:53.062267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:53.062498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:14.157 [2024-11-20 09:15:53.062519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.617 ms 00:22:14.157 [2024-11-20 09:15:53.062528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.157 [2024-11-20 09:15:53.063556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.157 [2024-11-20 09:15:53.063594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.157 [2024-11-20 09:15:53.063606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:22:14.157 [2024-11-20 09:15:53.063618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.121321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.121537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:14.414 [2024-11-20 09:15:53.121564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.677 ms 00:22:14.414 [2024-11-20 09:15:53.121573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.132793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:14.414 [2024-11-20 09:15:53.135814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.135855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:14.414 [2024-11-20 09:15:53.135868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.183 ms 00:22:14.414 [2024-11-20 09:15:53.135887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.136002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.136015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:14.414 [2024-11-20 09:15:53.136025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:14.414 [2024-11-20 09:15:53.136035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.137458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.137492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.414 [2024-11-20 09:15:53.137503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:22:14.414 [2024-11-20 09:15:53.137512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.137541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.137550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.414 [2024-11-20 09:15:53.137560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:14.414 [2024-11-20 09:15:53.137568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.137603] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:14.414 [2024-11-20 09:15:53.137616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.137625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:14.414 [2024-11-20 09:15:53.137634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:14.414 [2024-11-20 09:15:53.137643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.162237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.162289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.414 [2024-11-20 09:15:53.162302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.575 ms 00:22:14.414 [2024-11-20 09:15:53.162315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.414 [2024-11-20 09:15:53.162402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.414 [2024-11-20 09:15:53.162412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.414 [2024-11-20 09:15:53.162421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:14.414 [2024-11-20 09:15:53.162428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
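As an aside, the geometry reported in the layout dump above is internally consistent: 20971520 L2P entries at 4 bytes per address is exactly 80 MiB, which matches both the "Region l2p ... blocks: 80.00 MiB" line and the superblock entry "Region type:0x2 ... blk_sz:0x5000" (0x5000 = 20480 blocks, assuming the 4 KiB FTL block size the bdevs in this run use). A minimal shell sketch of that cross-check — illustrative only, not produced by the test:

    entries=20971520   # "L2P entries" from the ftl_layout_setup notice above
    addr=4             # "L2P address size"
    blk=4096           # FTL block size, assumed 4 KiB as elsewhere in this run
    bytes=$((entries * addr))
    printf 'l2p region: %d MiB, 0x%x blocks\n' $((bytes / 1024 / 1024)) $((bytes / blk))
    # -> l2p region: 80 MiB, 0x5000 blocks
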
00:22:14.414 [2024-11-20 09:15:53.163489] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 275.383 ms, result 0 00:22:15.781  [2024-11-20T09:15:55.632Z] Copying: 42/1024 [MB] (42 MBps) [2024-11-20T09:16:27.222Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 09:16:27.020820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.020913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:48.303 [2024-11-20 09:16:27.020940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:48.303 [2024-11-20 09:16:27.020957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.303 [2024-11-20 09:16:27.021002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:48.303 [2024-11-20 09:16:27.025099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.025288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:48.303 [2024-11-20 09:16:27.025320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.068 ms 00:22:48.303 [2024-11-20 09:16:27.025337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.303 [2024-11-20 09:16:27.026440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.026479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:48.303 [2024-11-20 09:16:27.026499]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:22:48.303 [2024-11-20 09:16:27.026519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.303 [2024-11-20 09:16:27.032515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.032549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:48.303 [2024-11-20 09:16:27.032564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.958 ms 00:22:48.303 [2024-11-20 09:16:27.032575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.303 [2024-11-20 09:16:27.038839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.038888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:48.303 [2024-11-20 09:16:27.038904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:22:48.303 [2024-11-20 09:16:27.038919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.303 [2024-11-20 09:16:27.062991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.063034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:48.303 [2024-11-20 09:16:27.063054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.010 ms 00:22:48.303 [2024-11-20 09:16:27.063067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.303 [2024-11-20 09:16:27.076783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.303 [2024-11-20 09:16:27.076828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:48.303 [2024-11-20 09:16:27.076847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.669 ms 00:22:48.303 [2024-11-20 09:16:27.076858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.878 [2024-11-20 09:16:27.515299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.878 [2024-11-20 09:16:27.515359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:48.878 [2024-11-20 09:16:27.515380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 438.367 ms 00:22:48.878 [2024-11-20 09:16:27.515392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.878 [2024-11-20 09:16:27.540364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.878 [2024-11-20 09:16:27.540527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:48.878 [2024-11-20 09:16:27.540550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.950 ms 00:22:48.878 [2024-11-20 09:16:27.540561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.878 [2024-11-20 09:16:27.564144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.878 [2024-11-20 09:16:27.564185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:48.878 [2024-11-20 09:16:27.564214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.541 ms 00:22:48.878 [2024-11-20 09:16:27.564226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.878 [2024-11-20 09:16:27.586982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.878 [2024-11-20 09:16:27.587112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist superblock 00:22:48.878 [2024-11-20 09:16:27.587134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.713 ms 00:22:48.878 [2024-11-20 09:16:27.587146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.878 [2024-11-20 09:16:27.609939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.878 [2024-11-20 09:16:27.610064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:48.878 [2024-11-20 09:16:27.610084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.726 ms 00:22:48.878 [2024-11-20 09:16:27.610095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.878 [2024-11-20 09:16:27.610132] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:48.878 [2024-11-20 09:16:27.610156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:48.878 [2024-11-20 09:16:27.610172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 
09:16:27.610413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:22:48.878 [2024-11-20 09:16:27.610742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.610998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.611011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.611024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.611037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:48.878 [2024-11-20 09:16:27.611051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:48.879 [2024-11-20 09:16:27.611523] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:48.879 [2024-11-20 09:16:27.611537] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ace07a7f-bff5-45b2-a4fb-6c01762c9936 00:22:48.879 [2024-11-20 09:16:27.611552] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:48.879 [2024-11-20 09:16:27.611565] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 9664 00:22:48.879 [2024-11-20 09:16:27.611579] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 8704 00:22:48.879 [2024-11-20 09:16:27.611594] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1103 00:22:48.879 [2024-11-20 09:16:27.611608] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:48.879 [2024-11-20 09:16:27.611636] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:48.879 [2024-11-20 09:16:27.611651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:48.879 [2024-11-20 09:16:27.611673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:48.879 [2024-11-20 09:16:27.611685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:48.879 [2024-11-20 09:16:27.611700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.879 [2024-11-20 09:16:27.611714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:48.879 [2024-11-20 09:16:27.611729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.567 ms 00:22:48.879 [2024-11-20 09:16:27.611742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 [2024-11-20 09:16:27.626120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.879 [2024-11-20 09:16:27.626154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:48.879 [2024-11-20 09:16:27.626170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.349 ms 00:22:48.879 [2024-11-20 09:16:27.626188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 [2024-11-20 09:16:27.626636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:48.879 [2024-11-20 09:16:27.626667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:48.879 [2024-11-20 09:16:27.626681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:22:48.879 [2024-11-20 09:16:27.626693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 
[2024-11-20 09:16:27.659152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.879 [2024-11-20 09:16:27.659188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:48.879 [2024-11-20 09:16:27.659209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.879 [2024-11-20 09:16:27.659220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 [2024-11-20 09:16:27.659299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.879 [2024-11-20 09:16:27.659316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:48.879 [2024-11-20 09:16:27.659331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.879 [2024-11-20 09:16:27.659344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 [2024-11-20 09:16:27.659447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.879 [2024-11-20 09:16:27.659463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:48.879 [2024-11-20 09:16:27.659478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.879 [2024-11-20 09:16:27.659497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 [2024-11-20 09:16:27.659522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.879 [2024-11-20 09:16:27.659538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:48.879 [2024-11-20 09:16:27.659553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.879 [2024-11-20 09:16:27.659566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:48.879 [2024-11-20 09:16:27.735649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:48.879 [2024-11-20 09:16:27.735701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:48.879 [2024-11-20 09:16:27.735723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:48.879 [2024-11-20 09:16:27.735734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.797928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.797979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:49.141 [2024-11-20 09:16:27.797995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.798087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.141 [2024-11-20 09:16:27.798099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.798192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.141 [2024-11-20 09:16:27.798205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798217] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.798359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.141 [2024-11-20 09:16:27.798372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.798454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:49.141 [2024-11-20 09:16:27.798467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.798543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.141 [2024-11-20 09:16:27.798556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:49.141 [2024-11-20 09:16:27.798639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.141 [2024-11-20 09:16:27.798652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:49.141 [2024-11-20 09:16:27.798664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.141 [2024-11-20 09:16:27.798817] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 777.959 ms, result 0 00:22:49.714 00:22:49.714 00:22:49.714 09:16:28 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:52.262 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:52.262 09:16:30 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:52.262 09:16:30 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:52.262 09:16:30 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:52.262 09:16:30 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:52.262 09:16:30 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:52.263 Process with pid 74853 is not found 00:22:52.263 Remove shared memory files 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74853 00:22:52.263 09:16:30 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 74853 ']' 00:22:52.263 09:16:30 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 74853 00:22:52.263 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74853) - No such process 00:22:52.263 09:16:30 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 74853 is not found' 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 
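Two things are worth noting in the output just above. First, the statistics dump is self-consistent: WAF = total writes / user writes = 9664 / 8704 ≈ 1.1103, exactly the value reported. Second, the pass/fail criterion for the restore test is the md5 comparison, not the FTL notices. A hedged sketch of that verify pattern — paths as they appear in the log, the checksum-recording step and the shutdown/restore cycle in between are my reconstruction of what restore.sh does, not traced output:

    # before shutdown: record a checksum of the data written through FTL
    md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # ... FTL device is shut down and restored here ...
    # after restore: "testfile: OK" means the data survived intact
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
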
00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:52.263 09:16:30 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:52.263 ************************************ 00:22:52.263 END TEST ftl_restore 00:22:52.263 ************************************ 00:22:52.263 00:22:52.263 real 3m47.825s 00:22:52.263 user 3m35.913s 00:22:52.263 sys 0m12.613s 00:22:52.263 09:16:30 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.263 09:16:30 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:52.263 09:16:30 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:52.263 09:16:30 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:52.263 09:16:30 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.263 09:16:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:52.263 ************************************ 00:22:52.263 START TEST ftl_dirty_shutdown 00:22:52.263 ************************************ 00:22:52.263 09:16:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:52.263 * Looking for test storage... 00:22:52.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:52.263 09:16:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:52.263 09:16:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:22:52.263 09:16:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.263 --rc genhtml_branch_coverage=1 00:22:52.263 --rc genhtml_function_coverage=1 00:22:52.263 --rc genhtml_legend=1 00:22:52.263 --rc geninfo_all_blocks=1 00:22:52.263 --rc geninfo_unexecuted_blocks=1 00:22:52.263 00:22:52.263 ' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.263 --rc genhtml_branch_coverage=1 00:22:52.263 --rc genhtml_function_coverage=1 00:22:52.263 --rc genhtml_legend=1 00:22:52.263 --rc geninfo_all_blocks=1 00:22:52.263 --rc geninfo_unexecuted_blocks=1 00:22:52.263 00:22:52.263 ' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.263 --rc genhtml_branch_coverage=1 00:22:52.263 --rc genhtml_function_coverage=1 00:22:52.263 --rc genhtml_legend=1 00:22:52.263 --rc geninfo_all_blocks=1 00:22:52.263 --rc geninfo_unexecuted_blocks=1 00:22:52.263 00:22:52.263 ' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:52.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.263 --rc genhtml_branch_coverage=1 00:22:52.263 --rc genhtml_function_coverage=1 00:22:52.263 --rc genhtml_legend=1 00:22:52.263 --rc geninfo_all_blocks=1 00:22:52.263 --rc geninfo_unexecuted_blocks=1 00:22:52.263 00:22:52.263 ' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:52.263 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:52.264 09:16:31 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77305 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77305 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 77305 ']' 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.264 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:52.264 [2024-11-20 09:16:31.126778] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
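The startup traced above follows the usual SPDK autotest pattern: launch spdk_tgt in the background, record its pid (77305 here, kept in svcpid), and block in waitforlisten until the RPC socket answers. waitforlisten's real implementation lives in autotest_common.sh; the loop below is a stand-in sketch of the same idea, using rpc_get_methods purely as a liveness probe against the default /var/tmp/spdk.sock socket:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # Poll until the target's RPC server accepts requests; rpc_get_methods
    # is a cheap built-in RPC, used here only to detect readiness.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
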
00:22:52.264 [2024-11-20 09:16:31.127060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77305 ] 00:22:52.525 [2024-11-20 09:16:31.289136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.525 [2024-11-20 09:16:31.390148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:53.096 09:16:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:53.372 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.634 { 00:22:53.634 "name": "nvme0n1", 00:22:53.634 "aliases": [ 00:22:53.634 "2503fb50-8564-4c7b-98b5-be8f32a32091" 00:22:53.634 ], 00:22:53.634 "product_name": "NVMe disk", 00:22:53.634 "block_size": 4096, 00:22:53.634 "num_blocks": 1310720, 00:22:53.634 "uuid": "2503fb50-8564-4c7b-98b5-be8f32a32091", 00:22:53.634 "numa_id": -1, 00:22:53.634 "assigned_rate_limits": { 00:22:53.634 "rw_ios_per_sec": 0, 00:22:53.634 "rw_mbytes_per_sec": 0, 00:22:53.634 "r_mbytes_per_sec": 0, 00:22:53.634 "w_mbytes_per_sec": 0 00:22:53.634 }, 00:22:53.634 "claimed": true, 00:22:53.634 "claim_type": "read_many_write_one", 00:22:53.634 "zoned": false, 00:22:53.634 "supported_io_types": { 00:22:53.634 "read": true, 00:22:53.634 "write": true, 00:22:53.634 "unmap": true, 00:22:53.634 "flush": true, 00:22:53.634 "reset": true, 00:22:53.634 "nvme_admin": true, 00:22:53.634 "nvme_io": true, 00:22:53.634 "nvme_io_md": false, 00:22:53.634 "write_zeroes": true, 00:22:53.634 "zcopy": false, 00:22:53.634 "get_zone_info": false, 00:22:53.634 "zone_management": false, 00:22:53.634 "zone_append": false, 00:22:53.634 "compare": true, 00:22:53.634 "compare_and_write": false, 00:22:53.634 "abort": true, 00:22:53.634 "seek_hole": false, 00:22:53.634 "seek_data": false, 00:22:53.634 
"copy": true, 00:22:53.634 "nvme_iov_md": false 00:22:53.634 }, 00:22:53.634 "driver_specific": { 00:22:53.634 "nvme": [ 00:22:53.634 { 00:22:53.634 "pci_address": "0000:00:11.0", 00:22:53.634 "trid": { 00:22:53.634 "trtype": "PCIe", 00:22:53.634 "traddr": "0000:00:11.0" 00:22:53.634 }, 00:22:53.634 "ctrlr_data": { 00:22:53.634 "cntlid": 0, 00:22:53.634 "vendor_id": "0x1b36", 00:22:53.634 "model_number": "QEMU NVMe Ctrl", 00:22:53.634 "serial_number": "12341", 00:22:53.634 "firmware_revision": "8.0.0", 00:22:53.634 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:53.634 "oacs": { 00:22:53.634 "security": 0, 00:22:53.634 "format": 1, 00:22:53.634 "firmware": 0, 00:22:53.634 "ns_manage": 1 00:22:53.634 }, 00:22:53.634 "multi_ctrlr": false, 00:22:53.634 "ana_reporting": false 00:22:53.634 }, 00:22:53.634 "vs": { 00:22:53.634 "nvme_version": "1.4" 00:22:53.634 }, 00:22:53.634 "ns_data": { 00:22:53.634 "id": 1, 00:22:53.634 "can_share": false 00:22:53.634 } 00:22:53.634 } 00:22:53.634 ], 00:22:53.634 "mp_policy": "active_passive" 00:22:53.634 } 00:22:53.634 } 00:22:53.634 ]' 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:53.634 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:53.893 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:53.893 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:53.893 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=f9f86486-3523-4211-9f51-14b80baef7e3 00:22:53.893 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:53.893 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9f86486-3523-4211-9f51-14b80baef7e3 00:22:54.154 09:16:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:54.415 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=0094dd7a-2931-46f2-a77b-8be245323edb 00:22:54.415 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0094dd7a-2931-46f2-a77b-8be245323edb 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:54.676 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:54.677 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:54.677 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:54.677 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:54.677 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:54.938 { 00:22:54.938 "name": "ac72b24a-9ce7-4938-90f4-6dc2770ec535", 00:22:54.938 "aliases": [ 00:22:54.938 "lvs/nvme0n1p0" 00:22:54.938 ], 00:22:54.938 "product_name": "Logical Volume", 00:22:54.938 "block_size": 4096, 00:22:54.938 "num_blocks": 26476544, 00:22:54.938 "uuid": "ac72b24a-9ce7-4938-90f4-6dc2770ec535", 00:22:54.938 "assigned_rate_limits": { 00:22:54.938 "rw_ios_per_sec": 0, 00:22:54.938 "rw_mbytes_per_sec": 0, 00:22:54.938 "r_mbytes_per_sec": 0, 00:22:54.938 "w_mbytes_per_sec": 0 00:22:54.938 }, 00:22:54.938 "claimed": false, 00:22:54.938 "zoned": false, 00:22:54.938 "supported_io_types": { 00:22:54.938 "read": true, 00:22:54.938 "write": true, 00:22:54.938 "unmap": true, 00:22:54.938 "flush": false, 00:22:54.938 "reset": true, 00:22:54.938 "nvme_admin": false, 00:22:54.938 "nvme_io": false, 00:22:54.938 "nvme_io_md": false, 00:22:54.938 "write_zeroes": true, 00:22:54.938 "zcopy": false, 00:22:54.938 "get_zone_info": false, 00:22:54.938 "zone_management": false, 00:22:54.938 "zone_append": false, 00:22:54.938 "compare": false, 00:22:54.938 "compare_and_write": false, 00:22:54.938 "abort": false, 00:22:54.938 "seek_hole": true, 00:22:54.938 "seek_data": true, 00:22:54.938 "copy": false, 00:22:54.938 "nvme_iov_md": false 00:22:54.938 }, 00:22:54.938 "driver_specific": { 00:22:54.938 "lvol": { 00:22:54.938 "lvol_store_uuid": "0094dd7a-2931-46f2-a77b-8be245323edb", 00:22:54.938 "base_bdev": "nvme0n1", 00:22:54.938 "thin_provision": true, 00:22:54.938 "num_allocated_clusters": 0, 00:22:54.938 "snapshot": false, 00:22:54.938 "clone": false, 00:22:54.938 "esnap_clone": false 00:22:54.938 } 00:22:54.938 } 00:22:54.938 } 00:22:54.938 ]' 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:54.938 09:16:33 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:55.199 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:55.460 { 00:22:55.460 "name": "ac72b24a-9ce7-4938-90f4-6dc2770ec535", 00:22:55.460 "aliases": [ 00:22:55.460 "lvs/nvme0n1p0" 00:22:55.460 ], 00:22:55.460 "product_name": "Logical Volume", 00:22:55.460 "block_size": 4096, 00:22:55.460 "num_blocks": 26476544, 00:22:55.460 "uuid": "ac72b24a-9ce7-4938-90f4-6dc2770ec535", 00:22:55.460 "assigned_rate_limits": { 00:22:55.460 "rw_ios_per_sec": 0, 00:22:55.460 "rw_mbytes_per_sec": 0, 00:22:55.460 "r_mbytes_per_sec": 0, 00:22:55.460 "w_mbytes_per_sec": 0 00:22:55.460 }, 00:22:55.460 "claimed": false, 00:22:55.460 "zoned": false, 00:22:55.460 "supported_io_types": { 00:22:55.460 "read": true, 00:22:55.460 "write": true, 00:22:55.460 "unmap": true, 00:22:55.460 "flush": false, 00:22:55.460 "reset": true, 00:22:55.460 "nvme_admin": false, 00:22:55.460 "nvme_io": false, 00:22:55.460 "nvme_io_md": false, 00:22:55.460 "write_zeroes": true, 00:22:55.460 "zcopy": false, 00:22:55.460 "get_zone_info": false, 00:22:55.460 "zone_management": false, 00:22:55.460 "zone_append": false, 00:22:55.460 "compare": false, 00:22:55.460 "compare_and_write": false, 00:22:55.460 "abort": false, 00:22:55.460 "seek_hole": true, 00:22:55.460 "seek_data": true, 00:22:55.460 "copy": false, 00:22:55.460 "nvme_iov_md": false 00:22:55.460 }, 00:22:55.460 "driver_specific": { 00:22:55.460 "lvol": { 00:22:55.460 "lvol_store_uuid": "0094dd7a-2931-46f2-a77b-8be245323edb", 00:22:55.460 "base_bdev": "nvme0n1", 00:22:55.460 "thin_provision": true, 00:22:55.460 "num_allocated_clusters": 0, 00:22:55.460 "snapshot": false, 00:22:55.460 "clone": false, 00:22:55.460 "esnap_clone": false 00:22:55.460 } 00:22:55.460 } 00:22:55.460 } 00:22:55.460 ]' 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:22:55.460 09:16:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:22:55.722 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ac72b24a-9ce7-4938-90f4-6dc2770ec535 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:55.984 { 00:22:55.984 "name": "ac72b24a-9ce7-4938-90f4-6dc2770ec535", 00:22:55.984 "aliases": [ 00:22:55.984 "lvs/nvme0n1p0" 00:22:55.984 ], 00:22:55.984 "product_name": "Logical Volume", 00:22:55.984 "block_size": 4096, 00:22:55.984 "num_blocks": 26476544, 00:22:55.984 "uuid": "ac72b24a-9ce7-4938-90f4-6dc2770ec535", 00:22:55.984 "assigned_rate_limits": { 00:22:55.984 "rw_ios_per_sec": 0, 00:22:55.984 "rw_mbytes_per_sec": 0, 00:22:55.984 "r_mbytes_per_sec": 0, 00:22:55.984 "w_mbytes_per_sec": 0 00:22:55.984 }, 00:22:55.984 "claimed": false, 00:22:55.984 "zoned": false, 00:22:55.984 "supported_io_types": { 00:22:55.984 "read": true, 00:22:55.984 "write": true, 00:22:55.984 "unmap": true, 00:22:55.984 "flush": false, 00:22:55.984 "reset": true, 00:22:55.984 "nvme_admin": false, 00:22:55.984 "nvme_io": false, 00:22:55.984 "nvme_io_md": false, 00:22:55.984 "write_zeroes": true, 00:22:55.984 "zcopy": false, 00:22:55.984 "get_zone_info": false, 00:22:55.984 "zone_management": false, 00:22:55.984 "zone_append": false, 00:22:55.984 "compare": false, 00:22:55.984 "compare_and_write": false, 00:22:55.984 "abort": false, 00:22:55.984 "seek_hole": true, 00:22:55.984 "seek_data": true, 00:22:55.984 "copy": false, 00:22:55.984 "nvme_iov_md": false 00:22:55.984 }, 00:22:55.984 "driver_specific": { 00:22:55.984 "lvol": { 00:22:55.984 "lvol_store_uuid": "0094dd7a-2931-46f2-a77b-8be245323edb", 00:22:55.984 "base_bdev": "nvme0n1", 00:22:55.984 "thin_provision": true, 00:22:55.984 "num_allocated_clusters": 0, 00:22:55.984 "snapshot": false, 00:22:55.984 "clone": false, 00:22:55.984 "esnap_clone": false 00:22:55.984 } 00:22:55.984 } 00:22:55.984 } 00:22:55.984 ]' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ac72b24a-9ce7-4938-90f4-6dc2770ec535 
--l2p_dram_limit 10' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:55.984 09:16:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ac72b24a-9ce7-4938-90f4-6dc2770ec535 --l2p_dram_limit 10 -c nvc0n1p0 00:22:56.247 [2024-11-20 09:16:35.075583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.247 [2024-11-20 09:16:35.075657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:56.247 [2024-11-20 09:16:35.075679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.247 [2024-11-20 09:16:35.075689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.247 [2024-11-20 09:16:35.075769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.247 [2024-11-20 09:16:35.075780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.247 [2024-11-20 09:16:35.075792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:56.247 [2024-11-20 09:16:35.075801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.247 [2024-11-20 09:16:35.075830] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:56.247 [2024-11-20 09:16:35.076697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:56.247 [2024-11-20 09:16:35.076735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.247 [2024-11-20 09:16:35.076744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.247 [2024-11-20 09:16:35.076757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:22:56.247 [2024-11-20 09:16:35.076765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.247 [2024-11-20 09:16:35.076856] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 269fda64-7e9a-49c9-9426-a4f0275b0519 00:22:56.247 [2024-11-20 09:16:35.078740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.247 [2024-11-20 09:16:35.078967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:56.247 [2024-11-20 09:16:35.078992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:56.247 [2024-11-20 09:16:35.079006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.247 [2024-11-20 09:16:35.088981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.247 [2024-11-20 09:16:35.089035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.247 [2024-11-20 09:16:35.089050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.910 ms 00:22:56.247 [2024-11-20 09:16:35.089061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.247 [2024-11-20 09:16:35.089172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.248 [2024-11-20 09:16:35.089186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.248 [2024-11-20 09:16:35.089196] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:22:56.248 [2024-11-20 09:16:35.089211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.248 [2024-11-20 09:16:35.089286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.248 [2024-11-20 09:16:35.089299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:56.248 [2024-11-20 09:16:35.089307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:56.248 [2024-11-20 09:16:35.089320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.248 [2024-11-20 09:16:35.089345] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.248 [2024-11-20 09:16:35.093946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.248 [2024-11-20 09:16:35.094140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.248 [2024-11-20 09:16:35.094169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.605 ms 00:22:56.248 [2024-11-20 09:16:35.094179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.248 [2024-11-20 09:16:35.094230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.248 [2024-11-20 09:16:35.094240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:56.248 [2024-11-20 09:16:35.094252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:56.248 [2024-11-20 09:16:35.094261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.248 [2024-11-20 09:16:35.094306] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:56.248 [2024-11-20 09:16:35.094460] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:56.248 [2024-11-20 09:16:35.094480] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:56.248 [2024-11-20 09:16:35.094493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:56.248 [2024-11-20 09:16:35.094507] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:56.248 [2024-11-20 09:16:35.094518] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:56.248 [2024-11-20 09:16:35.094530] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:56.248 [2024-11-20 09:16:35.094539] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:56.248 [2024-11-20 09:16:35.094551] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:56.248 [2024-11-20 09:16:35.094559] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:56.248 [2024-11-20 09:16:35.094570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.248 [2024-11-20 09:16:35.094579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:56.248 [2024-11-20 09:16:35.094589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:22:56.248 [2024-11-20 09:16:35.094606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.248 [2024-11-20 09:16:35.094694] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.248 [2024-11-20 09:16:35.094703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:56.248 [2024-11-20 09:16:35.094714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:56.248 [2024-11-20 09:16:35.094721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.248 [2024-11-20 09:16:35.094829] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:56.248 [2024-11-20 09:16:35.094840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:56.248 [2024-11-20 09:16:35.094851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.248 [2024-11-20 09:16:35.094860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.094895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:56.248 [2024-11-20 09:16:35.094903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.094912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:56.248 [2024-11-20 09:16:35.094919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:56.248 [2024-11-20 09:16:35.094929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:56.248 [2024-11-20 09:16:35.094937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.248 [2024-11-20 09:16:35.094945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:56.248 [2024-11-20 09:16:35.094952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:56.248 [2024-11-20 09:16:35.094961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.248 [2024-11-20 09:16:35.094967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:56.248 [2024-11-20 09:16:35.094976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:56.248 [2024-11-20 09:16:35.094982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.094994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:56.248 [2024-11-20 09:16:35.095001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:56.248 [2024-11-20 09:16:35.095032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:56.248 [2024-11-20 09:16:35.095055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095071] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:56.248 [2024-11-20 09:16:35.095081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095098] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:56.248 [2024-11-20 09:16:35.095105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:56.248 [2024-11-20 09:16:35.095134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.248 [2024-11-20 09:16:35.095150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:56.248 [2024-11-20 09:16:35.095157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:56.248 [2024-11-20 09:16:35.095166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.248 [2024-11-20 09:16:35.095172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:56.248 [2024-11-20 09:16:35.095182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:56.248 [2024-11-20 09:16:35.095189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:56.248 [2024-11-20 09:16:35.095204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:56.248 [2024-11-20 09:16:35.095212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095219] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:56.248 [2024-11-20 09:16:35.095229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:56.248 [2024-11-20 09:16:35.095236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.248 [2024-11-20 09:16:35.095255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:56.248 [2024-11-20 09:16:35.095267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:56.248 [2024-11-20 09:16:35.095273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:56.248 [2024-11-20 09:16:35.095282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:56.248 [2024-11-20 09:16:35.095289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:56.248 [2024-11-20 09:16:35.095298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:56.248 [2024-11-20 09:16:35.095310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:56.248 [2024-11-20 09:16:35.095322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.248 [2024-11-20 09:16:35.095333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:56.248 [2024-11-20 09:16:35.095343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:56.248 [2024-11-20 09:16:35.095351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:56.248 [2024-11-20 09:16:35.095362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:56.248 [2024-11-20 09:16:35.095371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:56.248 [2024-11-20 09:16:35.095380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:56.248 [2024-11-20 09:16:35.095388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:56.248 [2024-11-20 09:16:35.095398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:56.248 [2024-11-20 09:16:35.095406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:56.248 [2024-11-20 09:16:35.095417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:56.249 [2024-11-20 09:16:35.095425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:56.249 [2024-11-20 09:16:35.095434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:56.249 [2024-11-20 09:16:35.095443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:56.249 [2024-11-20 09:16:35.095454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:56.249 [2024-11-20 09:16:35.095461] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:56.249 [2024-11-20 09:16:35.095473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.249 [2024-11-20 09:16:35.095482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:56.249 [2024-11-20 09:16:35.095492] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:56.249 [2024-11-20 09:16:35.095499] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:56.249 [2024-11-20 09:16:35.095509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:56.249 [2024-11-20 09:16:35.095517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.249 [2024-11-20 09:16:35.095528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:56.249 [2024-11-20 09:16:35.095535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:22:56.249 [2024-11-20 09:16:35.095545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.249 [2024-11-20 09:16:35.095587] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:56.249 [2024-11-20 09:16:35.095602] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:00.463 [2024-11-20 09:16:39.269838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.269943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:00.463 [2024-11-20 09:16:39.269962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4174.230 ms 00:23:00.463 [2024-11-20 09:16:39.269975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.304052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.304139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.463 [2024-11-20 09:16:39.304160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.718 ms 00:23:00.463 [2024-11-20 09:16:39.304176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.304385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.304406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:00.463 [2024-11-20 09:16:39.304417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:00.463 [2024-11-20 09:16:39.304431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.341532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.341797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.463 [2024-11-20 09:16:39.341821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.054 ms 00:23:00.463 [2024-11-20 09:16:39.341833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.341897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.341916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.463 [2024-11-20 09:16:39.341925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:00.463 [2024-11-20 09:16:39.341936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.342562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.342606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.463 [2024-11-20 09:16:39.342617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:23:00.463 [2024-11-20 09:16:39.342627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.342750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.342762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.463 [2024-11-20 09:16:39.342774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:00.463 [2024-11-20 09:16:39.342787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.361082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.361141] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.463 [2024-11-20 09:16:39.361154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.274 ms 00:23:00.463 [2024-11-20 09:16:39.361166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.463 [2024-11-20 09:16:39.374964] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:00.463 [2024-11-20 09:16:39.379078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.463 [2024-11-20 09:16:39.379127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.463 [2024-11-20 09:16:39.379143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.787 ms 00:23:00.463 [2024-11-20 09:16:39.379153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.726 [2024-11-20 09:16:39.489717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.726 [2024-11-20 09:16:39.489805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:00.726 [2024-11-20 09:16:39.489828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.515 ms 00:23:00.726 [2024-11-20 09:16:39.489838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.726 [2024-11-20 09:16:39.490088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.726 [2024-11-20 09:16:39.490106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.726 [2024-11-20 09:16:39.490122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:23:00.726 [2024-11-20 09:16:39.490130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.726 [2024-11-20 09:16:39.518493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.726 [2024-11-20 09:16:39.518556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:00.726 [2024-11-20 09:16:39.518574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.292 ms 00:23:00.726 [2024-11-20 09:16:39.518583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.726 [2024-11-20 09:16:39.546399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.726 [2024-11-20 09:16:39.546460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:00.726 [2024-11-20 09:16:39.546478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.742 ms 00:23:00.726 [2024-11-20 09:16:39.546487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.726 [2024-11-20 09:16:39.547164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.726 [2024-11-20 09:16:39.547186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:00.726 [2024-11-20 09:16:39.547198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:23:00.726 [2024-11-20 09:16:39.547208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.726 [2024-11-20 09:16:39.635023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.726 [2024-11-20 09:16:39.635099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:00.726 [2024-11-20 09:16:39.635125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.736 ms 00:23:00.726 [2024-11-20 09:16:39.635134] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.988 [2024-11-20 09:16:39.664079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.988 [2024-11-20 09:16:39.664396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:00.988 [2024-11-20 09:16:39.664432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.815 ms 00:23:00.988 [2024-11-20 09:16:39.664445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.988 [2024-11-20 09:16:39.692719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.988 [2024-11-20 09:16:39.692803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:00.988 [2024-11-20 09:16:39.692822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.200 ms 00:23:00.988 [2024-11-20 09:16:39.692832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.988 [2024-11-20 09:16:39.721073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.988 [2024-11-20 09:16:39.721144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.988 [2024-11-20 09:16:39.721163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.137 ms 00:23:00.988 [2024-11-20 09:16:39.721172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.988 [2024-11-20 09:16:39.721241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.988 [2024-11-20 09:16:39.721251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:00.988 [2024-11-20 09:16:39.721268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:00.988 [2024-11-20 09:16:39.721277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.988 [2024-11-20 09:16:39.721401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.988 [2024-11-20 09:16:39.721413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:00.989 [2024-11-20 09:16:39.721429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:00.989 [2024-11-20 09:16:39.721437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.989 [2024-11-20 09:16:39.722896] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4646.675 ms, result 0 00:23:00.989 { 00:23:00.989 "name": "ftl0", 00:23:00.989 "uuid": "269fda64-7e9a-49c9-9426-a4f0275b0519" 00:23:00.989 } 00:23:00.989 09:16:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:00.989 09:16:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:01.284 09:16:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:01.284 09:16:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:01.284 09:16:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:01.284 /dev/nbd0 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:01.544 1+0 records in 00:23:01.544 1+0 records out 00:23:01.544 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710113 s, 5.8 MB/s 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:01.544 09:16:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:01.544 [2024-11-20 09:16:40.313640] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:23:01.544 [2024-11-20 09:16:40.313787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77458 ] 00:23:01.805 [2024-11-20 09:16:40.478571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.805 [2024-11-20 09:16:40.612563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:03.191  [2024-11-20T09:16:43.051Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-20T09:16:43.997Z] Copying: 382/1024 [MB] (192 MBps) [2024-11-20T09:16:44.943Z] Copying: 572/1024 [MB] (189 MBps) [2024-11-20T09:16:45.884Z] Copying: 760/1024 [MB] (187 MBps) [2024-11-20T09:16:46.455Z] Copying: 946/1024 [MB] (185 MBps) [2024-11-20T09:16:47.412Z] Copying: 1024/1024 [MB] (average 188 MBps) 00:23:08.493 00:23:08.493 09:16:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:11.046 09:16:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:11.046 [2024-11-20 09:16:49.414844] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
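The sequence above exposes the new FTL bdev as /dev/nbd0 (nbd_start_disk plus the waitfornbd probe that reads one 4096-byte block back with dd), then uses spdk_dd to stage 1 GiB of random data (262144 blocks of 4096 bytes) into a test file at roughly 188 MBps and records its md5sum; the second spdk_dd now starting pushes that same file onto the FTL device with O_DIRECT, so the data can be re-read and checksummed after the dirty shutdown that gives the test its name. Condensed to the commands recorded in the trace, with the long testfile path shortened for readability:

    spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    # stage 1 GiB of random data into a plain file (second core, mask 0x2)
    "$spdk_dd" -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
    md5sum testfile                         # checksum of what was written
    # replay the file onto the FTL bdev through its nbd endpoint, bypassing the page cache
    "$spdk_dd" -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct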
00:23:11.046 [2024-11-20 09:16:49.415208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77556 ] 00:23:11.046 [2024-11-20 09:16:49.570680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.046 [2024-11-20 09:16:49.690724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.432  [2024-11-20T09:16:51.922Z] Copying: 14/1024 [MB] (14 MBps) [2024-11-20T09:16:53.305Z] Copying: 29/1024 [MB] (14 MBps) [2024-11-20T09:16:54.279Z] Copying: 42/1024 [MB] (13 MBps) [2024-11-20T09:16:55.243Z] Copying: 55/1024 [MB] (12 MBps) [2024-11-20T09:16:56.185Z] Copying: 70/1024 [MB] (15 MBps) [2024-11-20T09:16:57.121Z] Copying: 87/1024 [MB] (16 MBps) [2024-11-20T09:16:58.055Z] Copying: 113/1024 [MB] (25 MBps) [2024-11-20T09:16:58.989Z] Copying: 141/1024 [MB] (28 MBps) [2024-11-20T09:16:59.922Z] Copying: 171/1024 [MB] (29 MBps) [2024-11-20T09:17:01.295Z] Copying: 201/1024 [MB] (29 MBps) [2024-11-20T09:17:02.229Z] Copying: 232/1024 [MB] (30 MBps) [2024-11-20T09:17:03.162Z] Copying: 262/1024 [MB] (30 MBps) [2024-11-20T09:17:04.092Z] Copying: 292/1024 [MB] (30 MBps) [2024-11-20T09:17:05.023Z] Copying: 325/1024 [MB] (32 MBps) [2024-11-20T09:17:05.956Z] Copying: 360/1024 [MB] (35 MBps) [2024-11-20T09:17:06.949Z] Copying: 391/1024 [MB] (31 MBps) [2024-11-20T09:17:08.321Z] Copying: 421/1024 [MB] (30 MBps) [2024-11-20T09:17:09.254Z] Copying: 451/1024 [MB] (29 MBps) [2024-11-20T09:17:10.184Z] Copying: 479/1024 [MB] (28 MBps) [2024-11-20T09:17:11.113Z] Copying: 509/1024 [MB] (30 MBps) [2024-11-20T09:17:12.073Z] Copying: 539/1024 [MB] (30 MBps) [2024-11-20T09:17:13.006Z] Copying: 572/1024 [MB] (32 MBps) [2024-11-20T09:17:13.939Z] Copying: 602/1024 [MB] (29 MBps) [2024-11-20T09:17:15.311Z] Copying: 630/1024 [MB] (27 MBps) [2024-11-20T09:17:16.244Z] Copying: 660/1024 [MB] (30 MBps) [2024-11-20T09:17:17.176Z] Copying: 690/1024 [MB] (29 MBps) [2024-11-20T09:17:18.109Z] Copying: 720/1024 [MB] (30 MBps) [2024-11-20T09:17:19.062Z] Copying: 749/1024 [MB] (29 MBps) [2024-11-20T09:17:19.994Z] Copying: 783/1024 [MB] (33 MBps) [2024-11-20T09:17:20.926Z] Copying: 814/1024 [MB] (30 MBps) [2024-11-20T09:17:22.383Z] Copying: 844/1024 [MB] (30 MBps) [2024-11-20T09:17:22.949Z] Copying: 874/1024 [MB] (30 MBps) [2024-11-20T09:17:24.324Z] Copying: 910/1024 [MB] (35 MBps) [2024-11-20T09:17:25.257Z] Copying: 942/1024 [MB] (31 MBps) [2024-11-20T09:17:26.191Z] Copying: 972/1024 [MB] (30 MBps) [2024-11-20T09:17:26.758Z] Copying: 1002/1024 [MB] (30 MBps) [2024-11-20T09:17:27.324Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:23:48.405 00:23:48.405 09:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:48.405 09:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:48.664 09:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:48.923 [2024-11-20 09:17:27.588815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.588886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:48.923 [2024-11-20 09:17:27.588900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:48.923 [2024-11-20 09:17:27.588911] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.588935] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:48.923 [2024-11-20 09:17:27.591517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.591547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:48.923 [2024-11-20 09:17:27.591560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:23:48.923 [2024-11-20 09:17:27.591569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.593368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.593397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:48.923 [2024-11-20 09:17:27.593408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.768 ms 00:23:48.923 [2024-11-20 09:17:27.593416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.608591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.608624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:48.923 [2024-11-20 09:17:27.608636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.155 ms 00:23:48.923 [2024-11-20 09:17:27.608644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.614853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.614993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:48.923 [2024-11-20 09:17:27.615014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.175 ms 00:23:48.923 [2024-11-20 09:17:27.615023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.638039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.638172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:48.923 [2024-11-20 09:17:27.638191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.948 ms 00:23:48.923 [2024-11-20 09:17:27.638198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.653617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.653765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:48.923 [2024-11-20 09:17:27.653788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.106 ms 00:23:48.923 [2024-11-20 09:17:27.653799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.653966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.653979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:48.923 [2024-11-20 09:17:27.653990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:48.923 [2024-11-20 09:17:27.653997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.676751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.676897] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:48.923 [2024-11-20 09:17:27.676917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.734 ms 00:23:48.923 [2024-11-20 09:17:27.676925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.699013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.699046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:48.923 [2024-11-20 09:17:27.699057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.054 ms 00:23:48.923 [2024-11-20 09:17:27.699065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.720748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.720878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:48.923 [2024-11-20 09:17:27.720896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.644 ms 00:23:48.923 [2024-11-20 09:17:27.720903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.743265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.923 [2024-11-20 09:17:27.743299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:48.923 [2024-11-20 09:17:27.743311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.289 ms 00:23:48.923 [2024-11-20 09:17:27.743319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.923 [2024-11-20 09:17:27.743354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:48.923 [2024-11-20 09:17:27.743369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:48.923 [2024-11-20 09:17:27.743581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743681] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 
09:17:27.743915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.743999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:23:48.924 [2024-11-20 09:17:27.744157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:48.924 [2024-11-20 09:17:27.744277] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:48.924 [2024-11-20 09:17:27.744291] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 269fda64-7e9a-49c9-9426-a4f0275b0519 00:23:48.924 [2024-11-20 09:17:27.744299] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:48.924 [2024-11-20 09:17:27.744309] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:48.924 [2024-11-20 09:17:27.744316] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:48.924 [2024-11-20 09:17:27.744327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:48.924 [2024-11-20 09:17:27.744334] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:48.924 [2024-11-20 09:17:27.744343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:48.924 [2024-11-20 09:17:27.744349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:48.924 [2024-11-20 09:17:27.744357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:48.924 [2024-11-20 09:17:27.744363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:48.924 [2024-11-20 09:17:27.744372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.924 [2024-11-20 09:17:27.744379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:48.924 [2024-11-20 09:17:27.744389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.019 ms 00:23:48.924 [2024-11-20 09:17:27.744396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:48.924 [2024-11-20 09:17:27.757042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.925 [2024-11-20 09:17:27.757070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:48.925 [2024-11-20 09:17:27.757086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.613 ms 00:23:48.925 [2024-11-20 09:17:27.757095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.925 [2024-11-20 09:17:27.757439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.925 [2024-11-20 09:17:27.757446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:48.925 [2024-11-20 09:17:27.757456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:23:48.925 [2024-11-20 09:17:27.757463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.925 [2024-11-20 09:17:27.799128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.925 [2024-11-20 09:17:27.799284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.925 [2024-11-20 09:17:27.799303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.925 [2024-11-20 09:17:27.799311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.925 [2024-11-20 09:17:27.799377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.925 [2024-11-20 09:17:27.799385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.925 [2024-11-20 09:17:27.799394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.925 [2024-11-20 09:17:27.799402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.925 [2024-11-20 09:17:27.799480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.925 [2024-11-20 09:17:27.799490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.925 [2024-11-20 09:17:27.799502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.925 [2024-11-20 09:17:27.799509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.925 [2024-11-20 09:17:27.799530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:48.925 [2024-11-20 09:17:27.799537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.925 [2024-11-20 09:17:27.799546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:48.925 [2024-11-20 09:17:27.799553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.878172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.878353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.183 [2024-11-20 09:17:27.878373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.878381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.941497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.941659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.183 [2024-11-20 09:17:27.941677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 
[2024-11-20 09:17:27.941685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.941780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.941790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:49.183 [2024-11-20 09:17:27.941800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.941809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.941857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.941866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:49.183 [2024-11-20 09:17:27.941899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.941906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.941996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.942005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:49.183 [2024-11-20 09:17:27.942016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.942023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.942061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.942070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:49.183 [2024-11-20 09:17:27.942079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.942087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.942122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.942131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:49.183 [2024-11-20 09:17:27.942141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.942148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.942194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:49.183 [2024-11-20 09:17:27.942203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:49.183 [2024-11-20 09:17:27.942213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:49.183 [2024-11-20 09:17:27.942220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.183 [2024-11-20 09:17:27.942344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 353.499 ms, result 0 00:23:49.183 true 00:23:49.183 09:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77305 00:23:49.183 09:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77305 00:23:49.183 09:17:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:49.183 [2024-11-20 09:17:28.031213] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
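The statistics dump above ends with WAF: inf, which is just the arithmetic falling out of the counters: write amplification here is the ratio of device writes to user writes, and 960 / 0 has no finite value. The spdk_dd call at dirty_shutdown.sh line 87 then generates the test payload, after line 83 has already killed the old target (kill -9 77305): --bs=4096 --count=262144 means 262,144 blocks of 4,096 bytes, i.e. exactly 1 GiB, which is why the copy progress below runs to 1024/1024 [MB]. A minimal sketch of that arithmetic, with plain coreutils dd standing in for spdk_dd (illustrative only, not part of the test scripts):

# Transfer size implied by "--bs=4096 --count=262144".
bs=4096
count=262144
echo "$((bs * count)) bytes = $((bs * count / 1024 / 1024)) MiB"   # 1073741824 bytes = 1024 MiB

# Coreutils analogue of the spdk_dd line above: 1 GiB of random data into testfile2.
dd if=/dev/urandom of=testfile2 bs=4096 count=262144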
00:23:49.183 [2024-11-20 09:17:28.031337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77958 ] 00:23:49.443 [2024-11-20 09:17:28.198865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.443 [2024-11-20 09:17:28.300115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.817  [2024-11-20T09:17:30.670Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-20T09:17:31.604Z] Copying: 392/1024 [MB] (196 MBps) [2024-11-20T09:17:32.536Z] Copying: 609/1024 [MB] (217 MBps) [2024-11-20T09:17:33.467Z] Copying: 860/1024 [MB] (250 MBps) [2024-11-20T09:17:34.033Z] Copying: 1024/1024 [MB] (average 218 MBps) 00:23:55.114 00:23:55.114 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77305 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:55.114 09:17:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:55.115 [2024-11-20 09:17:33.851668] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:23:55.115 [2024-11-20 09:17:33.852001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78023 ] 00:23:55.115 [2024-11-20 09:17:34.006983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.373 [2024-11-20 09:17:34.091849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.632 [2024-11-20 09:17:34.306816] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:55.632 [2024-11-20 09:17:34.307036] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:55.632 [2024-11-20 09:17:34.370675] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:55.632 [2024-11-20 09:17:34.371288] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:55.632 [2024-11-20 09:17:34.371521] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:55.892 [2024-11-20 09:17:34.551149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.551344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:55.892 [2024-11-20 09:17:34.551413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:55.892 [2024-11-20 09:17:34.551436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.551509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.551569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:55.892 [2024-11-20 09:17:34.551592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:55.892 [2024-11-20 09:17:34.551611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.551716] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:55.892 
[2024-11-20 09:17:34.552484] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:55.892 [2024-11-20 09:17:34.552575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.552634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:55.892 [2024-11-20 09:17:34.552657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:23:55.892 [2024-11-20 09:17:34.552675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.553744] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:55.892 [2024-11-20 09:17:34.565998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.566120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:55.892 [2024-11-20 09:17:34.566171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:23:55.892 [2024-11-20 09:17:34.566194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.566264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.566289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:55.892 [2024-11-20 09:17:34.566309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:55.892 [2024-11-20 09:17:34.566327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.571052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.571150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:55.892 [2024-11-20 09:17:34.571206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.654 ms 00:23:55.892 [2024-11-20 09:17:34.571227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.571310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.571331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:55.892 [2024-11-20 09:17:34.571350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:55.892 [2024-11-20 09:17:34.571368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.571421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.571501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:55.892 [2024-11-20 09:17:34.571543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:55.892 [2024-11-20 09:17:34.571560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.571593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:55.892 [2024-11-20 09:17:34.574957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.575048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:55.892 [2024-11-20 09:17:34.575098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.368 ms 00:23:55.892 [2024-11-20 09:17:34.575119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:55.892 [2024-11-20 09:17:34.575164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.892 [2024-11-20 09:17:34.575211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:55.892 [2024-11-20 09:17:34.575234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:55.892 [2024-11-20 09:17:34.575251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.892 [2024-11-20 09:17:34.575301] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:55.892 [2024-11-20 09:17:34.575339] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:55.892 [2024-11-20 09:17:34.575446] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:55.892 [2024-11-20 09:17:34.575485] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:55.892 [2024-11-20 09:17:34.575633] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:55.892 [2024-11-20 09:17:34.575645] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:55.893 [2024-11-20 09:17:34.575656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:55.893 [2024-11-20 09:17:34.575666] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:55.893 [2024-11-20 09:17:34.575679] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:55.893 [2024-11-20 09:17:34.575687] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:55.893 [2024-11-20 09:17:34.575694] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:55.893 [2024-11-20 09:17:34.575701] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:55.893 [2024-11-20 09:17:34.575708] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:55.893 [2024-11-20 09:17:34.575715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.893 [2024-11-20 09:17:34.575722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:55.893 [2024-11-20 09:17:34.575730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:23:55.893 [2024-11-20 09:17:34.575736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.893 [2024-11-20 09:17:34.575835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.893 [2024-11-20 09:17:34.575847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:55.893 [2024-11-20 09:17:34.575855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:55.893 [2024-11-20 09:17:34.575861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.893 [2024-11-20 09:17:34.575991] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:55.893 [2024-11-20 09:17:34.576003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:55.893 [2024-11-20 09:17:34.576011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576019] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:55.893 [2024-11-20 09:17:34.576033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:55.893 [2024-11-20 09:17:34.576053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:55.893 [2024-11-20 09:17:34.576066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:55.893 [2024-11-20 09:17:34.576079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:55.893 [2024-11-20 09:17:34.576085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:55.893 [2024-11-20 09:17:34.576091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:55.893 [2024-11-20 09:17:34.576097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:55.893 [2024-11-20 09:17:34.576103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576112] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:55.893 [2024-11-20 09:17:34.576118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:55.893 [2024-11-20 09:17:34.576138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:55.893 [2024-11-20 09:17:34.576157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:55.893 [2024-11-20 09:17:34.576176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:55.893 [2024-11-20 09:17:34.576194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:55.893 [2024-11-20 09:17:34.576213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:55.893 [2024-11-20 09:17:34.576225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:55.893 
[2024-11-20 09:17:34.576232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:55.893 [2024-11-20 09:17:34.576238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:55.893 [2024-11-20 09:17:34.576244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:55.893 [2024-11-20 09:17:34.576251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:55.893 [2024-11-20 09:17:34.576257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:55.893 [2024-11-20 09:17:34.576270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:55.893 [2024-11-20 09:17:34.576276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576283] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:55.893 [2024-11-20 09:17:34.576290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:55.893 [2024-11-20 09:17:34.576297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:55.893 [2024-11-20 09:17:34.576313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:55.893 [2024-11-20 09:17:34.576321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:55.893 [2024-11-20 09:17:34.576327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:55.893 [2024-11-20 09:17:34.576333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:55.893 [2024-11-20 09:17:34.576340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:55.893 [2024-11-20 09:17:34.576346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:55.893 [2024-11-20 09:17:34.576353] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:55.893 [2024-11-20 09:17:34.576362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.893 [2024-11-20 09:17:34.576370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:55.893 [2024-11-20 09:17:34.576377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:55.893 [2024-11-20 09:17:34.576384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:55.893 [2024-11-20 09:17:34.576391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:55.893 [2024-11-20 09:17:34.576398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:55.893 [2024-11-20 09:17:34.576405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:55.893 [2024-11-20 09:17:34.576411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 
blk_sz:0x800 00:23:55.893 [2024-11-20 09:17:34.576418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:55.893 [2024-11-20 09:17:34.576425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:55.893 [2024-11-20 09:17:34.576432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:55.893 [2024-11-20 09:17:34.576439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:55.893 [2024-11-20 09:17:34.576445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:55.893 [2024-11-20 09:17:34.576452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:55.893 [2024-11-20 09:17:34.576459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:55.893 [2024-11-20 09:17:34.576466] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:55.893 [2024-11-20 09:17:34.576473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:55.894 [2024-11-20 09:17:34.576481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:55.894 [2024-11-20 09:17:34.576488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:55.894 [2024-11-20 09:17:34.576494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:55.894 [2024-11-20 09:17:34.576501] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:55.894 [2024-11-20 09:17:34.576508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.576515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:55.894 [2024-11-20 09:17:34.576522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:23:55.894 [2024-11-20 09:17:34.576529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.602369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.602407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:55.894 [2024-11-20 09:17:34.602417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.792 ms 00:23:55.894 [2024-11-20 09:17:34.602425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.602512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.602523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:55.894 [2024-11-20 09:17:34.602531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:55.894 [2024-11-20 
09:17:34.602538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.640935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.641104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:55.894 [2024-11-20 09:17:34.641122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.342 ms 00:23:55.894 [2024-11-20 09:17:34.641137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.641189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.641199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:55.894 [2024-11-20 09:17:34.641207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:55.894 [2024-11-20 09:17:34.641214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.641571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.641587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:55.894 [2024-11-20 09:17:34.641596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:23:55.894 [2024-11-20 09:17:34.641603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.641728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.641737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:55.894 [2024-11-20 09:17:34.641745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:55.894 [2024-11-20 09:17:34.641752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.654667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.654699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:55.894 [2024-11-20 09:17:34.654709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.895 ms 00:23:55.894 [2024-11-20 09:17:34.654716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.667041] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:55.894 [2024-11-20 09:17:34.667075] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:55.894 [2024-11-20 09:17:34.667088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.667095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:55.894 [2024-11-20 09:17:34.667105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.254 ms 00:23:55.894 [2024-11-20 09:17:34.667112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.692052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.692196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:55.894 [2024-11-20 09:17:34.692223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.899 ms 00:23:55.894 [2024-11-20 09:17:34.692231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:23:55.894 [2024-11-20 09:17:34.703446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.703554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:55.894 [2024-11-20 09:17:34.703569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.177 ms 00:23:55.894 [2024-11-20 09:17:34.703576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.714599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.714699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:55.894 [2024-11-20 09:17:34.714714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.994 ms 00:23:55.894 [2024-11-20 09:17:34.714721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.715341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.715360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:55.894 [2024-11-20 09:17:34.715369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:23:55.894 [2024-11-20 09:17:34.715376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.769800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.769994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:55.894 [2024-11-20 09:17:34.770013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.406 ms 00:23:55.894 [2024-11-20 09:17:34.770021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.780617] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:55.894 [2024-11-20 09:17:34.783056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.783086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:55.894 [2024-11-20 09:17:34.783098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:23:55.894 [2024-11-20 09:17:34.783106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.783202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.783213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:55.894 [2024-11-20 09:17:34.783222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:55.894 [2024-11-20 09:17:34.783230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.783293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.783303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:55.894 [2024-11-20 09:17:34.783311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:55.894 [2024-11-20 09:17:34.783319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.783337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.783348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
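The startup sequence above mirrors the shutdown: after the superblock is loaded and the layout verified, each Restore step reloads what the corresponding Persist step wrote (NV cache, valid map, band info, trim metadata, P2L checkpoints, L2P). The layout numbers dumped earlier are self-consistent: 20,971,520 L2P entries at the reported address size of 4 bytes is 83,886,080 bytes, i.e. the 80.00 MiB shown for the l2p region. The per-step durations can also be cross-checked against the totals in the 'Management process finished' lines (353.499 ms for 'FTL shutdown' above); a small sketch over a saved slice of this console output, with build.log as an assumed filename:

# Sum the "duration:" values of the trace_step records in one management
# sequence and compare against the total printed in its
# "Management process finished" line. build.log is an assumed local copy
# of (a slice of) this console output.
grep -o 'duration: [0-9.]* ms' build.log |
  awk '{ sum += $2 } END { printf "sum of steps: %.3f ms\n", sum }'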
00:23:55.894 [2024-11-20 09:17:34.783356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:55.894 [2024-11-20 09:17:34.783363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.783393] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:55.894 [2024-11-20 09:17:34.783403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.783410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:55.894 [2024-11-20 09:17:34.783418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:55.894 [2024-11-20 09:17:34.783425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.894 [2024-11-20 09:17:34.806587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.894 [2024-11-20 09:17:34.806628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:55.894 [2024-11-20 09:17:34.806639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.143 ms 00:23:55.895 [2024-11-20 09:17:34.806647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.895 [2024-11-20 09:17:34.806721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:55.895 [2024-11-20 09:17:34.806731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:55.895 [2024-11-20 09:17:34.806739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:55.895 [2024-11-20 09:17:34.806747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:55.895 [2024-11-20 09:17:34.808195] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.574 ms, result 0 00:23:57.264  [2024-11-20T09:17:37.116Z] Copying: 45/1024 [MB] (45 MBps) [2024-11-20T09:17:38.145Z] Copying: 92/1024 [MB] (46 MBps) [2024-11-20T09:17:39.078Z] Copying: 143/1024 [MB] (50 MBps) [2024-11-20T09:17:40.008Z] Copying: 185/1024 [MB] (42 MBps) [2024-11-20T09:17:40.939Z] Copying: 228/1024 [MB] (42 MBps) [2024-11-20T09:17:41.869Z] Copying: 269/1024 [MB] (41 MBps) [2024-11-20T09:17:43.239Z] Copying: 313/1024 [MB] (43 MBps) [2024-11-20T09:17:44.173Z] Copying: 348/1024 [MB] (35 MBps) [2024-11-20T09:17:45.105Z] Copying: 387/1024 [MB] (38 MBps) [2024-11-20T09:17:46.037Z] Copying: 430/1024 [MB] (43 MBps) [2024-11-20T09:17:46.969Z] Copying: 473/1024 [MB] (42 MBps) [2024-11-20T09:17:47.901Z] Copying: 511/1024 [MB] (37 MBps) [2024-11-20T09:17:48.837Z] Copying: 547/1024 [MB] (36 MBps) [2024-11-20T09:17:50.225Z] Copying: 578/1024 [MB] (30 MBps) [2024-11-20T09:17:51.158Z] Copying: 614/1024 [MB] (36 MBps) [2024-11-20T09:17:52.088Z] Copying: 655/1024 [MB] (41 MBps) [2024-11-20T09:17:53.017Z] Copying: 697/1024 [MB] (41 MBps) [2024-11-20T09:17:53.948Z] Copying: 737/1024 [MB] (40 MBps) [2024-11-20T09:17:54.882Z] Copying: 779/1024 [MB] (41 MBps) [2024-11-20T09:17:56.256Z] Copying: 821/1024 [MB] (42 MBps) [2024-11-20T09:17:57.186Z] Copying: 866/1024 [MB] (44 MBps) [2024-11-20T09:17:58.119Z] Copying: 910/1024 [MB] (44 MBps) [2024-11-20T09:17:59.103Z] Copying: 954/1024 [MB] (44 MBps) [2024-11-20T09:18:00.035Z] Copying: 998/1024 [MB] (44 MBps) [2024-11-20T09:18:00.604Z] Copying: 1023/1024 [MB] (24 MBps) [2024-11-20T09:18:00.604Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-11-20 09:18:00.485339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:24:21.685 [2024-11-20 09:18:00.485531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:21.685 [2024-11-20 09:18:00.485621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:21.685 [2024-11-20 09:18:00.485651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.685 [2024-11-20 09:18:00.487696] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:21.685 [2024-11-20 09:18:00.493605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.685 [2024-11-20 09:18:00.493735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:21.685 [2024-11-20 09:18:00.493808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.765 ms 00:24:21.685 [2024-11-20 09:18:00.493835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.685 [2024-11-20 09:18:00.504830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.685 [2024-11-20 09:18:00.505012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:21.685 [2024-11-20 09:18:00.505085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.032 ms 00:24:21.685 [2024-11-20 09:18:00.505155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.685 [2024-11-20 09:18:00.523593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.685 [2024-11-20 09:18:00.523764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:21.685 [2024-11-20 09:18:00.523836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:24:21.685 [2024-11-20 09:18:00.523899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.685 [2024-11-20 09:18:00.530159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.685 [2024-11-20 09:18:00.530324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:21.685 [2024-11-20 09:18:00.530391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.091 ms 00:24:21.685 [2024-11-20 09:18:00.530461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.685 [2024-11-20 09:18:00.554806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.685 [2024-11-20 09:18:00.554986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:21.685 [2024-11-20 09:18:00.555066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.276 ms 00:24:21.685 [2024-11-20 09:18:00.555099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.685 [2024-11-20 09:18:00.569431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.685 [2024-11-20 09:18:00.569587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:21.685 [2024-11-20 09:18:00.569653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.285 ms 00:24:21.685 [2024-11-20 09:18:00.569684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.949 [2024-11-20 09:18:00.761193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.949 [2024-11-20 09:18:00.761372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:21.949 [2024-11-20 09:18:00.761447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 191.452 ms 
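With a gigabyte actually written this time, the persist steps carry real payload: Persist P2L metadata takes 191.452 ms here versus 0.131 ms in the empty shutdown earlier, and the Bands validity dump that follows shows Band 1 open with 129,280 of 261,120 blocks valid (about 49.5%) rather than the all-free table of the first run. A companion sketch for summarizing such a dump, again assuming build.log holds a saved copy of this output:

# Tally non-free bands and their valid block counts from a "Bands validity" dump.
# build.log is an assumed local copy of this console output.
grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' build.log |
  awk '$9 != "free" { bands++; valid += $3 }
       END { printf "%d non-free band(s), %d valid blocks\n", bands, valid }'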
00:24:21.949 [2024-11-20 09:18:00.761488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.949 [2024-11-20 09:18:00.785519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.949 [2024-11-20 09:18:00.785680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:21.949 [2024-11-20 09:18:00.785749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.965 ms 00:24:21.949 [2024-11-20 09:18:00.785778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.949 [2024-11-20 09:18:00.809305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.949 [2024-11-20 09:18:00.809456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:21.949 [2024-11-20 09:18:00.809519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.318 ms 00:24:21.949 [2024-11-20 09:18:00.809548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.949 [2024-11-20 09:18:00.832752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.949 [2024-11-20 09:18:00.832947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:21.949 [2024-11-20 09:18:00.833357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.157 ms 00:24:21.949 [2024-11-20 09:18:00.833378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.949 [2024-11-20 09:18:00.856599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.949 [2024-11-20 09:18:00.856778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:21.949 [2024-11-20 09:18:00.856798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.123 ms 00:24:21.949 [2024-11-20 09:18:00.856806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.949 [2024-11-20 09:18:00.856842] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:21.949 [2024-11-20 09:18:00.856857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129280 / 261120 wr_cnt: 1 state: open 00:24:21.949 [2024-11-20 09:18:00.856867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.856997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857156] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 
09:18:00.857346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:21.949 [2024-11-20 09:18:00.857421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 
00:24:21.950 [2024-11-20 09:18:00.857570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:21.950 [2024-11-20 09:18:00.857697] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:21.950 [2024-11-20 09:18:00.857710] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 269fda64-7e9a-49c9-9426-a4f0275b0519 00:24:21.950 [2024-11-20 09:18:00.857721] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129280 00:24:21.950 [2024-11-20 09:18:00.857735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130240 00:24:21.950 [2024-11-20 09:18:00.857750] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129280 00:24:21.950 [2024-11-20 09:18:00.857758] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0074 00:24:21.950 [2024-11-20 09:18:00.857765] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:21.950 [2024-11-20 09:18:00.857772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:21.950 [2024-11-20 09:18:00.857779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:21.950 [2024-11-20 09:18:00.857785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:21.950 [2024-11-20 09:18:00.857792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:21.950 [2024-11-20 09:18:00.857799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.950 [2024-11-20 
09:18:00.857807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:21.950 [2024-11-20 09:18:00.857816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:24:21.950 [2024-11-20 09:18:00.857824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.870519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.212 [2024-11-20 09:18:00.870567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:22.212 [2024-11-20 09:18:00.870578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.668 ms 00:24:22.212 [2024-11-20 09:18:00.870586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.870978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.212 [2024-11-20 09:18:00.871000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.212 [2024-11-20 09:18:00.871009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:24:22.212 [2024-11-20 09:18:00.871029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.903903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.212 [2024-11-20 09:18:00.903956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.212 [2024-11-20 09:18:00.903967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.212 [2024-11-20 09:18:00.903975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.904045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.212 [2024-11-20 09:18:00.904054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.212 [2024-11-20 09:18:00.904061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.212 [2024-11-20 09:18:00.904068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.904158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.212 [2024-11-20 09:18:00.904168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.212 [2024-11-20 09:18:00.904176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.212 [2024-11-20 09:18:00.904184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.904198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.212 [2024-11-20 09:18:00.904207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.212 [2024-11-20 09:18:00.904214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.212 [2024-11-20 09:18:00.904221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:00.980662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.212 [2024-11-20 09:18:00.980715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.212 [2024-11-20 09:18:00.980726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.212 [2024-11-20 09:18:00.980733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.212 [2024-11-20 09:18:01.043150] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.212 [2024-11-20 09:18:01.043200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:22.212 [2024-11-20 09:18:01.043212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.212 [2024-11-20 09:18:01.043219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.212 [2024-11-20 09:18:01.043278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.212 [2024-11-20 09:18:01.043287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:22.212 [2024-11-20 09:18:01.043295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.212 [2024-11-20 09:18:01.043302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.212 [2024-11-20 09:18:01.043348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.212 [2024-11-20 09:18:01.043357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:22.212 [2024-11-20 09:18:01.043364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.212 [2024-11-20 09:18:01.043371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.212 [2024-11-20 09:18:01.043454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.212 [2024-11-20 09:18:01.043467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:22.212 [2024-11-20 09:18:01.043475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.212 [2024-11-20 09:18:01.043482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.212 [2024-11-20 09:18:01.043509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.212 [2024-11-20 09:18:01.043518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:22.213 [2024-11-20 09:18:01.043526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.213 [2024-11-20 09:18:01.043534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.213 [2024-11-20 09:18:01.043567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.213 [2024-11-20 09:18:01.043577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:22.213 [2024-11-20 09:18:01.043585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.213 [2024-11-20 09:18:01.043592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.213 [2024-11-20 09:18:01.043632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:22.213 [2024-11-20 09:18:01.043641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:22.213 [2024-11-20 09:18:01.043649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:22.213 [2024-11-20 09:18:01.043656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:22.213 [2024-11-20 09:18:01.043767] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.310 ms, result 0
00:24:26.477
00:24:26.477
00:24:26.477 09:18:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:24:28.375 09:18:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:28.375 [2024-11-20 09:18:07.114340] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
00:24:28.375 [2024-11-20 09:18:07.114467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78337 ]
00:24:28.375 [2024-11-20 09:18:07.266318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:28.633 [2024-11-20 09:18:07.365319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:28.893 [2024-11-20 09:18:07.616990] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:28.893 [2024-11-20 09:18:07.617051] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:28.893 [2024-11-20 09:18:07.770286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.893 [2024-11-20 09:18:07.770338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:28.893 [2024-11-20 09:18:07.770355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:28.893 [2024-11-20 09:18:07.770363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.893 [2024-11-20 09:18:07.770410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.893 [2024-11-20 09:18:07.770420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:28.893 [2024-11-20 09:18:07.770430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:24:28.893 [2024-11-20 09:18:07.770438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.893 [2024-11-20 09:18:07.770456] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:28.893 [2024-11-20 09:18:07.771189] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:28.893 [2024-11-20 09:18:07.771211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.893 [2024-11-20 09:18:07.771220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:28.893 [2024-11-20 09:18:07.771228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms
00:24:28.893 [2024-11-20 09:18:07.771235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.893 [2024-11-20 09:18:07.772314] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:28.893 [2024-11-20 09:18:07.784322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.893 [2024-11-20 09:18:07.784370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:28.893 [2024-11-20 09:18:07.784385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.008 ms
00:24:28.893 [2024-11-20 09:18:07.784393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.893 [2024-11-20 09:18:07.784454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.893 [2024-11-20 09:18:07.784464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:28.893 [2024-11-20
09:18:07.784472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:28.893 [2024-11-20 09:18:07.784479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.789601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.789639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:28.893 [2024-11-20 09:18:07.789655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.059 ms 00:24:28.893 [2024-11-20 09:18:07.789664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.789751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.789762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:28.893 [2024-11-20 09:18:07.789771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:28.893 [2024-11-20 09:18:07.789778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.789820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.789833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:28.893 [2024-11-20 09:18:07.789847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:28.893 [2024-11-20 09:18:07.789854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.789895] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:28.893 [2024-11-20 09:18:07.793417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.793447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:28.893 [2024-11-20 09:18:07.793457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.528 ms 00:24:28.893 [2024-11-20 09:18:07.793467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.793498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.793507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:28.893 [2024-11-20 09:18:07.793515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:28.893 [2024-11-20 09:18:07.793522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.793543] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:28.893 [2024-11-20 09:18:07.793560] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:28.893 [2024-11-20 09:18:07.793595] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:28.893 [2024-11-20 09:18:07.793612] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:28.893 [2024-11-20 09:18:07.793713] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:28.893 [2024-11-20 09:18:07.793723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:28.893 [2024-11-20 09:18:07.793732] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:28.893 [2024-11-20 09:18:07.793742] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:28.893 [2024-11-20 09:18:07.793751] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:28.893 [2024-11-20 09:18:07.793759] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:28.893 [2024-11-20 09:18:07.793766] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:28.893 [2024-11-20 09:18:07.793774] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:28.893 [2024-11-20 09:18:07.793780] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:28.893 [2024-11-20 09:18:07.793790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.793797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:28.893 [2024-11-20 09:18:07.793805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:24:28.893 [2024-11-20 09:18:07.793812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.893 [2024-11-20 09:18:07.793908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.893 [2024-11-20 09:18:07.793918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:28.893 [2024-11-20 09:18:07.793925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:24:28.894 [2024-11-20 09:18:07.793932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:28.894 [2024-11-20 09:18:07.794049] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:28.894 [2024-11-20 09:18:07.794061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:28.894 [2024-11-20 09:18:07.794069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:28.894 [2024-11-20 09:18:07.794091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:28.894 [2024-11-20 09:18:07.794114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.894 [2024-11-20 09:18:07.794127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:28.894 [2024-11-20 09:18:07.794134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:28.894 [2024-11-20 09:18:07.794140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:28.894 [2024-11-20 09:18:07.794147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:28.894 [2024-11-20 09:18:07.794153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:28.894 [2024-11-20 09:18:07.794166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:24:28.894 [2024-11-20 09:18:07.794172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:28.894 [2024-11-20 09:18:07.794180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:28.894 [2024-11-20 09:18:07.794200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:28.894 [2024-11-20 09:18:07.794220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:28.894 [2024-11-20 09:18:07.794239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:28.894 [2024-11-20 09:18:07.794261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:28.894 [2024-11-20 09:18:07.794293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.894 [2024-11-20 09:18:07.794309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:28.894 [2024-11-20 09:18:07.794315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:28.894 [2024-11-20 09:18:07.794322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:28.894 [2024-11-20 09:18:07.794328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:28.894 [2024-11-20 09:18:07.794335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:28.894 [2024-11-20 09:18:07.794341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:28.894 [2024-11-20 09:18:07.794359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:28.894 [2024-11-20 09:18:07.794371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794382] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:28.894 [2024-11-20 09:18:07.794392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:28.894 [2024-11-20 09:18:07.794399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:28.894 [2024-11-20 09:18:07.794414] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:28.894 [2024-11-20 09:18:07.794421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:28.894 [2024-11-20 09:18:07.794429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:28.894 [2024-11-20 09:18:07.794436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:28.894 [2024-11-20 09:18:07.794442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:28.894 [2024-11-20 09:18:07.794449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:28.894 [2024-11-20 09:18:07.794457] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:28.894 [2024-11-20 09:18:07.794466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:28.894 [2024-11-20 09:18:07.794482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:28.894 [2024-11-20 09:18:07.794489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:28.894 [2024-11-20 09:18:07.794496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:28.894 [2024-11-20 09:18:07.794503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:28.894 [2024-11-20 09:18:07.794510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:28.894 [2024-11-20 09:18:07.794517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:28.894 [2024-11-20 09:18:07.794524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:28.894 [2024-11-20 09:18:07.794532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:28.894 [2024-11-20 09:18:07.794539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:28.894 [2024-11-20 09:18:07.794573] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:28.894 
[2024-11-20 09:18:07.794584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:28.894 [2024-11-20 09:18:07.794599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:28.894 [2024-11-20 09:18:07.794606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:28.894 [2024-11-20 09:18:07.794613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:28.894 [2024-11-20 09:18:07.794621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:28.894 [2024-11-20 09:18:07.794628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:28.894 [2024-11-20 09:18:07.794635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:24:28.894 [2024-11-20 09:18:07.794642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.821198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.821249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.154 [2024-11-20 09:18:07.821263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.502 ms 00:24:29.154 [2024-11-20 09:18:07.821271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.821382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.821392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:29.154 [2024-11-20 09:18:07.821400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:24:29.154 [2024-11-20 09:18:07.821407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.862323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.862378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:29.154 [2024-11-20 09:18:07.862391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.851 ms 00:24:29.154 [2024-11-20 09:18:07.862399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.862454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.862463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.154 [2024-11-20 09:18:07.862472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:29.154 [2024-11-20 09:18:07.862482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.862858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.862899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.154 [2024-11-20 09:18:07.862909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.304 ms 00:24:29.154 [2024-11-20 09:18:07.862917] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.863045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.863055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.154 [2024-11-20 09:18:07.863062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:29.154 [2024-11-20 09:18:07.863074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.876384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.876426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.154 [2024-11-20 09:18:07.876440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.289 ms 00:24:29.154 [2024-11-20 09:18:07.876448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.889201] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:29.154 [2024-11-20 09:18:07.889248] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:29.154 [2024-11-20 09:18:07.889260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.889269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:29.154 [2024-11-20 09:18:07.889278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:24:29.154 [2024-11-20 09:18:07.889285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.914045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.914118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:29.154 [2024-11-20 09:18:07.914131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.697 ms 00:24:29.154 [2024-11-20 09:18:07.914139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.926232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.926284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:29.154 [2024-11-20 09:18:07.926295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.023 ms 00:24:29.154 [2024-11-20 09:18:07.926302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.937848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.937907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:29.154 [2024-11-20 09:18:07.937918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.502 ms 00:24:29.154 [2024-11-20 09:18:07.937925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:07.938547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.938573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:29.154 [2024-11-20 09:18:07.938582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:24:29.154 [2024-11-20 09:18:07.938592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 
09:18:07.994434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:07.994492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:29.154 [2024-11-20 09:18:07.994510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.822 ms 00:24:29.154 [2024-11-20 09:18:07.994518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.005098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:29.154 [2024-11-20 09:18:08.007719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.007752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:29.154 [2024-11-20 09:18:08.007764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.136 ms 00:24:29.154 [2024-11-20 09:18:08.007773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.007906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.007918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:29.154 [2024-11-20 09:18:08.007939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:29.154 [2024-11-20 09:18:08.007950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.009386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.009418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:29.154 [2024-11-20 09:18:08.009428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:24:29.154 [2024-11-20 09:18:08.009436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.009462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.009470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:29.154 [2024-11-20 09:18:08.009478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:29.154 [2024-11-20 09:18:08.009485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.009519] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:29.154 [2024-11-20 09:18:08.009531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.009538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:29.154 [2024-11-20 09:18:08.009546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:29.154 [2024-11-20 09:18:08.009553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.032824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.032866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:29.154 [2024-11-20 09:18:08.032891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.253 ms 00:24:29.154 [2024-11-20 09:18:08.032903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.154 [2024-11-20 09:18:08.032974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.154 [2024-11-20 09:18:08.032984] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:29.154 [2024-11-20 09:18:08.032993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:24:29.154 [2024-11-20 09:18:08.033001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.154 [2024-11-20 09:18:08.033966] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.257 ms, result 0
00:24:30.527 [2024-11-20T09:18:10.379Z] Copying: 936/1048576 [kB] (936 kBps) [2024-11-20T09:18:11.313Z] Copying: 5136/1048576 [kB] (4200 kBps) [2024-11-20T09:18:12.245Z] Copying: 51/1024 [MB] (46 MBps) [2024-11-20T09:18:13.395Z] Copying: 105/1024 [MB] (53 MBps) [2024-11-20T09:18:14.329Z] Copying: 159/1024 [MB] (53 MBps) [2024-11-20T09:18:15.263Z] Copying: 212/1024 [MB] (53 MBps) [2024-11-20T09:18:16.636Z] Copying: 266/1024 [MB] (54 MBps) [2024-11-20T09:18:17.581Z] Copying: 317/1024 [MB] (50 MBps) [2024-11-20T09:18:18.521Z] Copying: 368/1024 [MB] (51 MBps) [2024-11-20T09:18:19.453Z] Copying: 421/1024 [MB] (52 MBps) [2024-11-20T09:18:20.386Z] Copying: 472/1024 [MB] (51 MBps) [2024-11-20T09:18:21.320Z] Copying: 522/1024 [MB] (49 MBps) [2024-11-20T09:18:22.255Z] Copying: 574/1024 [MB] (51 MBps) [2024-11-20T09:18:23.629Z] Copying: 628/1024 [MB] (54 MBps) [2024-11-20T09:18:24.562Z] Copying: 676/1024 [MB] (47 MBps) [2024-11-20T09:18:25.494Z] Copying: 727/1024 [MB] (50 MBps) [2024-11-20T09:18:26.426Z] Copying: 779/1024 [MB] (52 MBps) [2024-11-20T09:18:27.395Z] Copying: 835/1024 [MB] (55 MBps) [2024-11-20T09:18:28.327Z] Copying: 889/1024 [MB] (53 MBps) [2024-11-20T09:18:29.258Z] Copying: 940/1024 [MB] (51 MBps) [2024-11-20T09:18:29.823Z] Copying: 993/1024 [MB] (53 MBps) [2024-11-20T09:18:29.823Z] Copying: 1024/1024 [MB] (average 47 MBps)
[2024-11-20 09:18:29.821039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:50.904 [2024-11-20 09:18:29.821090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:50.904 [2024-11-20 09:18:29.821110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:24:50.904 [2024-11-20 09:18:29.821118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:50.904 [2024-11-20 09:18:29.821138] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:51.162 [2024-11-20 09:18:29.824096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:51.162 [2024-11-20 09:18:29.824202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:51.162 [2024-11-20 09:18:29.824262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms
00:24:51.162 [2024-11-20 09:18:29.824285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:51.162 [2024-11-20 09:18:29.824544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:51.162 [2024-11-20 09:18:29.824625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:51.162 [2024-11-20 09:18:29.824685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms
00:24:51.162 [2024-11-20 09:18:29.824708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:51.162 [2024-11-20 09:18:29.833425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:51.162 [2024-11-20 09:18:29.833536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:51.162 [2024-11-20 09:18:29.833592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.661 ms 00:24:51.162 [2024-11-20 09:18:29.833616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-20 09:18:29.839861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-20 09:18:29.839970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:51.162 [2024-11-20 09:18:29.840023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.180 ms 00:24:51.162 [2024-11-20 09:18:29.840053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-20 09:18:29.863781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-20 09:18:29.863816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:51.162 [2024-11-20 09:18:29.863827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.524 ms 00:24:51.162 [2024-11-20 09:18:29.863835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-20 09:18:29.877186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-20 09:18:29.877217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:51.162 [2024-11-20 09:18:29.877228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.321 ms 00:24:51.162 [2024-11-20 09:18:29.877237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.162 [2024-11-20 09:18:29.878609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.162 [2024-11-20 09:18:29.878709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:51.162 [2024-11-20 09:18:29.878723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.338 ms 00:24:51.162 [2024-11-20 09:18:29.878730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.163 [2024-11-20 09:18:29.901769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.163 [2024-11-20 09:18:29.901894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:51.163 [2024-11-20 09:18:29.901910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.017 ms 00:24:51.163 [2024-11-20 09:18:29.901917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.163 [2024-11-20 09:18:29.927344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.163 [2024-11-20 09:18:29.927376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:51.163 [2024-11-20 09:18:29.927396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.398 ms 00:24:51.163 [2024-11-20 09:18:29.927404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.163 [2024-11-20 09:18:29.949517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.163 [2024-11-20 09:18:29.949632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:51.163 [2024-11-20 09:18:29.949646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.082 ms 00:24:51.163 [2024-11-20 09:18:29.949653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:51.163 [2024-11-20 09:18:29.971687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:51.163 [2024-11-20 09:18:29.971724] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state (duration: 21.983 ms, status: 0)
00:24:51.163 [2024-11-20 09:18:29.971774] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:51.163 [2024-11-20 09:18:29.971788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:24:51.163 [2024-11-20 09:18:29.971798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:24:51.163 [2024-11-20 09:18:29.971817 .. 09:18:29.972573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 3-100: 0 / 261120 wr_cnt: 0 state: free (98 identical entries condensed)
00:24:51.164 [2024-11-20 09:18:29.972589] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:51.164 [2024-11-20 09:18:29.972597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 269fda64-7e9a-49c9-9426-a4f0275b0519
00:24:51.164 [2024-11-20 09:18:29.972605] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:24:51.164 [2024-11-20 09:18:29.972612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 135360
00:24:51.164 [2024-11-20 09:18:29.972618] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133376
00:24:51.164 [2024-11-20 09:18:29.972630] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0149
00:24:51.164 [2024-11-20 09:18:29.972638] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: crit: 0, high: 0, low: 0, start: 0
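(Editor's note: the WAF figure above is simply the ratio of the two write counters in the same dump, so it can be sanity-checked by hand. A minimal sketch, with the values copied from the log; illustrative arithmetic only, not SPDK code:)

```python
# Check the WAF reported by ftl_dev_dump_stats against its own inputs.
total_writes = 135360  # "total writes" from the dump above
user_writes = 133376   # "user writes" from the dump above

waf = total_writes / user_writes
print(f"WAF = {waf:.4f}")  # -> WAF = 1.0149, matching the logged value
```

The roughly 1.5% excess over 1.0 is media traffic beyond user data, presumably metadata and checkpoint writes, which is plausibly small for a mostly sequential test write like this one.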
00:24:51.164 [2024-11-20 09:18:29.972678] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics (duration: 0.905 ms, status: 0)
00:24:51.164 [2024-11-20 09:18:29.984855] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P (duration: 12.138 ms, status: 0)
00:24:51.164 [2024-11-20 09:18:29.985262] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing (duration: 0.314 ms, status: 0)
00:24:51.164 [2024-11-20 09:18:30.017670 .. 09:18:30.156782] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Rollback steps, each with duration: 0.000 ms, status: 0: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
00:24:51.422 [2024-11-20 09:18:30.156914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.824 ms, result 0
00:24:51.986 09:18:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:54.610 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:24:54.610 09:18:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:24:54.610 [2024-11-20 09:18:33.089015] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization...
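(Editor's note: the spdk_dd invocation above reads back the region written before the dirty shutdown. A quick size check, assuming --count and --skip are expressed in the FTL bdev's 4 KiB blocks, an assumption the copy totals further down are consistent with:)

```python
# Rough size check of the spdk_dd read-back (4 KiB block units assumed).
BLOCK_SIZE = 4096  # assumed FTL bdev block size
count = 262144     # --count: blocks to copy out of ftl0
skip = 262144      # --skip: blocks skipped at the start of ftl0

print(count * BLOCK_SIZE // 2**20, "MiB")  # -> 1024 MiB, matching "Copying: 1024/1024 [MB]"
```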
00:24:54.610 [2024-11-20 09:18:33.089138] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78607 ]
00:24:54.610 [2024-11-20 09:18:33.247937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:54.610 [2024-11-20 09:18:33.352091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:54.868 [2024-11-20 09:18:33.611215] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:54.868 [2024-11-20 09:18:33.611441] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:54.868 [2024-11-20 09:18:33.765385] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration (duration: 0.004 ms, status: 0)
00:24:54.868 [2024-11-20 09:18:33.765509] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev (duration: 0.034 ms, status: 0)
00:24:54.868 [2024-11-20 09:18:33.765557] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:54.868 [2024-11-20 09:18:33.766249] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:54.868 [2024-11-20 09:18:33.766394] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev (duration: 0.840 ms, status: 0)
00:24:54.868 [2024-11-20 09:18:33.767527] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:54.868 [2024-11-20 09:18:33.780299] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block (duration: 12.772 ms, status: 0)
00:24:54.868 [2024-11-20 09:18:33.780420] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block (duration: 0.023 ms, status: 0)
00:24:55.127 [2024-11-20 09:18:33.785626] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools (duration: 5.118 ms, status: 0)
00:24:55.127 [2024-11-20 09:18:33.785756] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands (duration: 0.051 ms, status: 0)
00:24:55.127 [2024-11-20 09:18:33.785822] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device (duration: 0.008 ms, status: 0)
00:24:55.127 [2024-11-20 09:18:33.785884] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:55.127 [2024-11-20 09:18:33.789310] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel (duration: 3.446 ms, status: 0)
00:24:55.127 [2024-11-20 09:18:33.789388] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands (duration: 0.010 ms, status: 0)
00:24:55.127 [2024-11-20 09:18:33.789432] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:55.127 [2024-11-20 09:18:33.789449] upgrade/ftl_sb_v5.c: ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes, base layout blob load 0x48 bytes, layout blob load 0x190 bytes
00:24:55.127 [2024-11-20 09:18:33.789603] upgrade/ftl_sb_v5.c: ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes, base layout blob store 0x48 bytes, layout blob store 0x190 bytes
00:24:55.127 [2024-11-20 09:18:33.789633] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:24:55.128 [2024-11-20 09:18:33.789641] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:24:55.128 [2024-11-20 09:18:33.789649] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:24:55.128 [2024-11-20 09:18:33.789657] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:24:55.128 [2024-11-20 09:18:33.789664] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:24:55.128 [2024-11-20 09:18:33.789672] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:24:55.128 [2024-11-20 09:18:33.789681] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout (duration: 0.251 ms, status: 0)
00:24:55.128 [2024-11-20 09:18:33.789785] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout (duration: 0.069 ms, status: 0)
00:24:55.128 [2024-11-20 09:18:33.789937] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (region: offset / blocks, MiB, sorted by offset):
00:24:55.128   sb: 0.00 / 0.12, l2p: 0.12 / 80.00, band_md: 80.12 / 0.50, band_md_mirror: 80.62 / 0.50, p2l0: 81.12 / 8.00, p2l1: 89.12 / 8.00, p2l2: 97.12 / 8.00, p2l3: 105.12 / 8.00, trim_md: 113.12 / 0.25, trim_md_mirror: 113.38 / 0.25, trim_log: 113.62 / 0.12, trim_log_mirror: 113.75 / 0.12, nvc_md: 113.88 / 0.12, nvc_md_mirror: 114.00 / 0.12
00:24:55.128 [2024-11-20 09:18:33.790235] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout (region: offset / blocks, MiB, sorted by offset):
00:24:55.128   sb_mirror: 0.00 / 0.12, data_btm: 0.25 / 102400.00, vmap: 102400.25 / 3.38
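(Editor's note: the layout dumps above are easy to post-process when debugging region offsets. A throwaway parser-style check over a few regions sampled from the NV cache layout, values in MiB as printed by dump_region(); illustrative only, not SPDK code:)

```python
# Assert that consecutive NV cache layout regions (sampled from the dump
# above as (name, offset, blocks) triples) do not overlap.
sampled = [
    ("sb", 0.00, 0.12),
    ("l2p", 0.12, 80.00),
    ("band_md", 80.12, 0.50),
    ("band_md_mirror", 80.62, 0.50),
    ("p2l0", 81.12, 8.00),
]
ordered = sorted(sampled, key=lambda r: r[1])  # sort by offset
for (name, off, length), (nxt, nxt_off, _) in zip(ordered, ordered[1:]):
    assert off + length <= nxt_off, f"{name} overlaps {nxt}"
print("sampled NV cache regions do not overlap")
```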
00:24:55.128 [2024-11-20 09:18:33.790304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:55.128   Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:55.128   Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:55.128   Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:55.128   Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:55.128   Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:55.128   Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:55.128   Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:55.128   Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:55.128   Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:55.128   Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:55.128   Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:55.128   Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:55.128   Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:55.128   Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:55.128   Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:55.128 [2024-11-20 09:18:33.790419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:55.128   Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:55.128   Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:55.128   Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:55.128   Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:55.128   Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:55.128 [2024-11-20 09:18:33.790465] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade (duration: 0.612 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.816773] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata (duration: 26.237 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.817054] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses (duration: 0.062 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.857740] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache (duration: 40.599 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.857884] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map (duration: 0.017 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.858298] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map (duration: 0.310 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.858468] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata (duration: 0.105 ms, status: 0)
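(Editor's note: the blk_offs/blk_sz pairs in the nvc metadata dump above chain exactly: each region starts where the previous one ends. A quick check of that arithmetic, with the values copied from the log:)

```python
# Verify that the "SB metadata layout - nvc" regions are packed back to back.
regions = [  # (blk_offs, blk_sz) in logged order
    (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
    (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
    (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
    (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0),
]
for (off, sz), (nxt_off, _) in zip(regions, regions[1:]):
    assert off + sz == nxt_off, f"gap after region at {hex(off)}"
print("all nvc metadata regions are contiguous")
```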
00:24:55.129 [2024-11-20 09:18:33.872062] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc (duration: 13.546 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.884792] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:24:55.129 [2024-11-20 09:18:33.884831] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:55.129 [2024-11-20 09:18:33.884843] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata (duration: 12.615 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.909310] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata (duration: 24.382 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.921327] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata (duration: 11.886 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.932848] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata (duration: 11.309 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.933629] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing (duration: 0.525 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:33.990213] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints (duration: 56.523 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.001306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:55.129 [2024-11-20 09:18:34.004049] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P (duration: 13.693 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.004347] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P (duration: 0.011 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.004967] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization (duration: 0.552 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.005032] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller (duration: 0.005 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.005089] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:55.129 [2024-11-20 09:18:34.005101] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup (duration: 0.013 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.029026] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state (duration: 23.885 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.029171] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization (duration: 0.037 ms, status: 0)
00:24:55.129 [2024-11-20 09:18:34.030155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.334 ms, result 0
00:24:56.504 [2024-11-20T09:18:36.357Z] Copying: 47/1024 [MB] (47 MBps)
[2024-11-20T09:18:37.289Z .. 09:18:55.888Z] (20 intermediate progress ticks condensed; per-tick throughput 44-53 MBps)
00:25:16.969 [2024-11-20T09:18:55.888Z] Copying: 1024/1024 [MB] (average 47 MBps)
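(Editor's note: the reported average throughput is consistent with the wall-clock timestamps. Treating the end of 'FTL startup' as the approximate start of the copy:)

```python
from datetime import datetime

# Cross-check the "(average 47 MBps)" figure against the timestamps above.
start = datetime.fromisoformat("2024-11-20T09:18:34.030")  # 'FTL startup' finished
end = datetime.fromisoformat("2024-11-20T09:18:55.888")    # final Copying tick

elapsed = (end - start).total_seconds()  # roughly 21.9 s
print(round(1024 / elapsed), "MBps")     # -> 47 MBps
```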
00:25:16.969 [2024-11-20 09:18:55.749540] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel (duration: 0.003 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.749642] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:16.969 [2024-11-20 09:18:55.752716] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device (duration: 3.058 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.753142] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller (duration: 0.314 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.758796] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P (duration: 5.579 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.767192] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims (duration: 8.298 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.790126] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata (duration: 22.838 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.803802] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata (duration: 13.609 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.804991] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata (duration: 1.099 ms, status: 0)
00:25:16.969 [2024-11-20 09:18:55.827847] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata (duration: 22.794 ms, status: 0)
00:25:16.970 [2024-11-20 09:18:55.849963] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata (duration: 21.921 ms, status: 0)
00:25:16.970 [2024-11-20 09:18:55.872210] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock (duration: 22.172 ms, status: 0)
00:25:17.229 [2024-11-20 09:18:55.893974] mngt/ftl_mngt.c: trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state (duration: 21.679 ms, status: 0)
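(Editor's note: when triaging slow shutdowns it can help to total the per-step durations. A sketch over two of the lines above, written against the condensed format of this transcript; illustrative parsing only:)

```python
import re

# Sum per-step durations from trace_step lines as condensed above.
lines = [
    "[FTL][ftl0] Action: Persist L2P (duration: 5.579 ms, status: 0)",
    "[FTL][ftl0] Action: Persist superblock (duration: 22.172 ms, status: 0)",
]
total = sum(float(m.group(1)) for line in lines
            if (m := re.search(r"duration: ([\d.]+) ms", line)))
print(f"{total:.3f} ms across {len(lines)} steps")  # -> 27.751 ms
```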
09:18:55.894038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:17.229 [2024-11-20 09:18:55.894051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:17.229 [2024-11-20 09:18:55.894067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:17.229 [2024-11-20 09:18:55.894075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 - Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:17.230 [2024-11-20 09:18:55.894790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:17.230 [2024-11-20 09:18:55.894797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:17.230 [2024-11-20 09:18:55.894813] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:17.230 [2024-11-20 09:18:55.894823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 269fda64-7e9a-49c9-9426-a4f0275b0519 00:25:17.230 [2024-11-20 09:18:55.894831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:17.230 [2024-11-20 09:18:55.894838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:17.230 [2024-11-20 09:18:55.894845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:17.230 [2024-11-20 09:18:55.894853] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:17.230 [2024-11-20 09:18:55.894859] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:17.230 [2024-11-20 09:18:55.894867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:17.230 [2024-11-20 09:18:55.894894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:17.230 [2024-11-20 09:18:55.894901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:17.230 [2024-11-20 09:18:55.894907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:17.230 [2024-11-20 09:18:55.894914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.230 [2024-11-20 09:18:55.894922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:17.230 [2024-11-20 09:18:55.894930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:25:17.230 [2024-11-20 09:18:55.894937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:55.907116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.230 [2024-11-20 09:18:55.907145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:17.230 [2024-11-20 09:18:55.907154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:25:17.230 [2024-11-20 09:18:55.907162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:55.907493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.230 [2024-11-20 09:18:55.907509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:17.230 [2024-11-20 09:18:55.907522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:25:17.230 [2024-11-20 09:18:55.907529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:55.939640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:55.939677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:17.230 [2024-11-20 09:18:55.939687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:55.939695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:55.939748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:55.939756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:17.230 [2024-11-20 09:18:55.939768] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:55.939782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:55.939835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:55.939845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:17.230 [2024-11-20 09:18:55.939853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:55.939860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:55.939891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:55.939899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:17.230 [2024-11-20 09:18:55.939906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:55.939918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.014911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.014965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:17.230 [2024-11-20 09:18:56.014978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.014986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:17.230 [2024-11-20 09:18:56.079367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.230 [2024-11-20 09:18:56.079471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.230 [2024-11-20 09:18:56.079527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.230 [2024-11-20 09:18:56.079637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:25:17.230 [2024-11-20 09:18:56.079689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.230 [2024-11-20 09:18:56.079747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:17.230 [2024-11-20 09:18:56.079822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.230 [2024-11-20 09:18:56.079830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:17.230 [2024-11-20 09:18:56.079837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.230 [2024-11-20 09:18:56.079973] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 330.407 ms, result 0 00:25:18.164 00:25:18.164 00:25:18.164 09:18:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:20.062 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:20.062 09:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:20.062 09:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:20.062 09:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:20.063 09:18:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:20.321 Process with pid 77305 is not found 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77305 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 77305 ']' 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 77305 00:25:20.321 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77305) - No such process 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 77305 is not found' 00:25:20.321 09:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:20.578 Remove shared memory files 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 
-- # rm -f rm -f 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:20.578 ************************************ 00:25:20.578 END TEST ftl_dirty_shutdown 00:25:20.578 ************************************ 00:25:20.578 00:25:20.578 real 2m28.544s 00:25:20.578 user 2m48.673s 00:25:20.578 sys 0m24.895s 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.578 09:18:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:20.578 09:18:59 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:20.578 09:18:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:25:20.578 09:18:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.578 09:18:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:20.578 ************************************ 00:25:20.578 START TEST ftl_upgrade_shutdown 00:25:20.579 ************************************ 00:25:20.579 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:20.836 * Looking for test storage... 00:25:20.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.836 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.836 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.836 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.836 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.836 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.837 --rc genhtml_branch_coverage=1 00:25:20.837 --rc genhtml_function_coverage=1 00:25:20.837 --rc genhtml_legend=1 00:25:20.837 --rc geninfo_all_blocks=1 00:25:20.837 --rc geninfo_unexecuted_blocks=1 00:25:20.837 00:25:20.837 ' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.837 --rc genhtml_branch_coverage=1 00:25:20.837 --rc genhtml_function_coverage=1 00:25:20.837 --rc genhtml_legend=1 00:25:20.837 --rc geninfo_all_blocks=1 00:25:20.837 --rc geninfo_unexecuted_blocks=1 00:25:20.837 00:25:20.837 ' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.837 --rc genhtml_branch_coverage=1 00:25:20.837 --rc genhtml_function_coverage=1 00:25:20.837 --rc genhtml_legend=1 00:25:20.837 --rc geninfo_all_blocks=1 00:25:20.837 --rc geninfo_unexecuted_blocks=1 00:25:20.837 00:25:20.837 ' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.837 --rc genhtml_branch_coverage=1 00:25:20.837 --rc genhtml_function_coverage=1 00:25:20.837 --rc genhtml_legend=1 00:25:20.837 --rc geninfo_all_blocks=1 00:25:20.837 --rc geninfo_unexecuted_blocks=1 00:25:20.837 00:25:20.837 ' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:20.837 09:18:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:20.837 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78955 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78955 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78955 ']' 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.838 09:18:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:20.838 [2024-11-20 09:18:59.705225] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
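The launch traced above (ftl/common.sh@87 starts $spdk_tgt_bin with --cpumask=[0], then waitforlisten 78955 blocks until the RPC socket at /var/tmp/spdk.sock answers) is the harness's standard start-and-wait pattern. A minimal bash sketch of that pattern, assuming the stock scripts/rpc.py; the polling loop is an illustration of the idea, not the verbatim autotest_common.sh implementation:

"$spdk_tgt_bin" --cpumask='[0]' &
spdk_tgt_pid=$!

waitforlisten() {
  local pid=$1 rpc_sock=${2:-/var/tmp/spdk.sock}
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
  # assumed readiness probe: the target counts as up once the socket exists
  # and rpc_get_methods answers; give up if the target exits first
  while kill -0 "$pid" 2>/dev/null; do
    if [[ -S $rpc_sock ]] && "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods &>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforlisten "$spdk_tgt_pid"

The same kill -0 liveness probe shows up again at teardown, where killprocess logged 'Process with pid 77305 is not found' above once the previous target had already exited.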
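Once the target is up, create_base_bdev attaches the PCIe device at 0000:00:11.0 as basen1 and sizes it with the get_bdev_size helper whose xtrace follows: it pulls the bdev's JSON from bdev_get_bdevs and reduces block_size and num_blocks to MiB with jq. A small reconstruction from that trace; only the individual commands appear verbatim in the log, while the function wrapper and the MiB arithmetic are inferred from bs=4096, nb=1310720, bdev_size=5120:

get_bdev_size() {
  local bdev_name=$1 bdev_info bs nb
  bdev_info=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name")
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for basen1
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 1310720 for basen1
  # report the size in MiB: 1310720 blocks * 4096 B = 5120 MiB
  echo $(( nb * bs / 1024 / 1024 ))
}

That 5120 MiB result is why base_size=5120 in the trace below, and the 20480 MiB basen1p0 volume created afterwards is thin-provisioned (-t) so it can sit on the 5 GiB device.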
00:25:20.838 [2024-11-20 09:18:59.705339] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78955 ] 00:25:21.096 [2024-11-20 09:18:59.856928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.096 [2024-11-20 09:18:59.956789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.030 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:25:22.031 09:19:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:22.289 { 00:25:22.289 "name": "basen1", 00:25:22.289 "aliases": [ 00:25:22.289 "e42eb7db-6d4d-40bf-bff6-b6e34d63f466" 00:25:22.289 ], 00:25:22.289 "product_name": "NVMe disk", 00:25:22.289 "block_size": 4096, 00:25:22.289 "num_blocks": 1310720, 00:25:22.289 "uuid": "e42eb7db-6d4d-40bf-bff6-b6e34d63f466", 00:25:22.289 "numa_id": -1, 00:25:22.289 "assigned_rate_limits": { 00:25:22.289 "rw_ios_per_sec": 0, 00:25:22.289 "rw_mbytes_per_sec": 0, 00:25:22.289 "r_mbytes_per_sec": 0, 00:25:22.289 "w_mbytes_per_sec": 0 00:25:22.289 }, 00:25:22.289 "claimed": true, 00:25:22.289 "claim_type": "read_many_write_one", 00:25:22.289 "zoned": false, 00:25:22.289 "supported_io_types": { 00:25:22.289 "read": true, 00:25:22.289 "write": true, 00:25:22.289 "unmap": true, 00:25:22.289 "flush": true, 00:25:22.289 "reset": true, 00:25:22.289 "nvme_admin": true, 00:25:22.289 "nvme_io": true, 00:25:22.289 "nvme_io_md": false, 00:25:22.289 "write_zeroes": true, 00:25:22.289 "zcopy": false, 00:25:22.289 "get_zone_info": false, 00:25:22.289 "zone_management": false, 00:25:22.289 "zone_append": false, 00:25:22.289 "compare": true, 00:25:22.289 "compare_and_write": false, 00:25:22.289 "abort": true, 00:25:22.289 "seek_hole": false, 00:25:22.289 "seek_data": false, 00:25:22.289 "copy": true, 00:25:22.289 "nvme_iov_md": false 00:25:22.289 }, 00:25:22.289 "driver_specific": { 00:25:22.289 "nvme": [ 00:25:22.289 { 00:25:22.289 "pci_address": "0000:00:11.0", 00:25:22.289 "trid": { 00:25:22.289 "trtype": "PCIe", 00:25:22.289 "traddr": "0000:00:11.0" 00:25:22.289 }, 00:25:22.289 "ctrlr_data": { 00:25:22.289 "cntlid": 0, 00:25:22.289 "vendor_id": "0x1b36", 00:25:22.289 "model_number": "QEMU NVMe Ctrl", 00:25:22.289 "serial_number": "12341", 00:25:22.289 "firmware_revision": "8.0.0", 00:25:22.289 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:22.289 "oacs": { 00:25:22.289 "security": 0, 00:25:22.289 "format": 1, 00:25:22.289 "firmware": 0, 00:25:22.289 "ns_manage": 1 00:25:22.289 }, 00:25:22.289 "multi_ctrlr": false, 00:25:22.289 "ana_reporting": false 00:25:22.289 }, 00:25:22.289 "vs": { 00:25:22.289 "nvme_version": "1.4" 00:25:22.289 }, 00:25:22.289 "ns_data": { 00:25:22.289 "id": 1, 00:25:22.289 "can_share": false 00:25:22.289 } 00:25:22.289 } 00:25:22.289 ], 00:25:22.289 "mp_policy": "active_passive" 00:25:22.289 } 00:25:22.289 } 00:25:22.289 ]' 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:22.289 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:22.547 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=0094dd7a-2931-46f2-a77b-8be245323edb 00:25:22.547 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:22.547 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0094dd7a-2931-46f2-a77b-8be245323edb 00:25:22.805 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:23.063 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=0409ccb3-3efa-4bb4-980e-87198a413af8 00:25:23.063 09:19:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 0409ccb3-3efa-4bb4-980e-87198a413af8 00:25:23.320 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6426a852-bc7a-429c-abfb-4c5eff242145 00:25:23.320 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6426a852-bc7a-429c-abfb-4c5eff242145 ]] 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6426a852-bc7a-429c-abfb-4c5eff242145 5120 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6426a852-bc7a-429c-abfb-4c5eff242145 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6426a852-bc7a-429c-abfb-4c5eff242145 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6426a852-bc7a-429c-abfb-4c5eff242145 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6426a852-bc7a-429c-abfb-4c5eff242145 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:23.321 { 00:25:23.321 "name": "6426a852-bc7a-429c-abfb-4c5eff242145", 00:25:23.321 "aliases": [ 00:25:23.321 "lvs/basen1p0" 00:25:23.321 ], 00:25:23.321 "product_name": "Logical Volume", 00:25:23.321 "block_size": 4096, 00:25:23.321 "num_blocks": 5242880, 00:25:23.321 "uuid": "6426a852-bc7a-429c-abfb-4c5eff242145", 00:25:23.321 "assigned_rate_limits": { 00:25:23.321 "rw_ios_per_sec": 0, 00:25:23.321 "rw_mbytes_per_sec": 0, 00:25:23.321 "r_mbytes_per_sec": 0, 00:25:23.321 "w_mbytes_per_sec": 0 00:25:23.321 }, 00:25:23.321 "claimed": false, 00:25:23.321 "zoned": false, 00:25:23.321 "supported_io_types": { 00:25:23.321 "read": true, 00:25:23.321 "write": true, 00:25:23.321 "unmap": true, 00:25:23.321 "flush": false, 00:25:23.321 "reset": true, 00:25:23.321 "nvme_admin": false, 00:25:23.321 "nvme_io": false, 00:25:23.321 "nvme_io_md": false, 00:25:23.321 "write_zeroes": 
true, 00:25:23.321 "zcopy": false, 00:25:23.321 "get_zone_info": false, 00:25:23.321 "zone_management": false, 00:25:23.321 "zone_append": false, 00:25:23.321 "compare": false, 00:25:23.321 "compare_and_write": false, 00:25:23.321 "abort": false, 00:25:23.321 "seek_hole": true, 00:25:23.321 "seek_data": true, 00:25:23.321 "copy": false, 00:25:23.321 "nvme_iov_md": false 00:25:23.321 }, 00:25:23.321 "driver_specific": { 00:25:23.321 "lvol": { 00:25:23.321 "lvol_store_uuid": "0409ccb3-3efa-4bb4-980e-87198a413af8", 00:25:23.321 "base_bdev": "basen1", 00:25:23.321 "thin_provision": true, 00:25:23.321 "num_allocated_clusters": 0, 00:25:23.321 "snapshot": false, 00:25:23.321 "clone": false, 00:25:23.321 "esnap_clone": false 00:25:23.321 } 00:25:23.321 } 00:25:23.321 } 00:25:23.321 ]' 00:25:23.321 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:23.579 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:23.851 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:23.851 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:23.851 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:24.112 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:24.112 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:24.112 09:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6426a852-bc7a-429c-abfb-4c5eff242145 -c cachen1p0 --l2p_dram_limit 2 00:25:24.112 [2024-11-20 09:19:02.964063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.964115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:24.112 [2024-11-20 09:19:02.964130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:24.112 [2024-11-20 09:19:02.964138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.964192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.964202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:24.112 [2024-11-20 09:19:02.964211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:25:24.112 [2024-11-20 09:19:02.964218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.964240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:24.112 [2024-11-20 
09:19:02.964988] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:24.112 [2024-11-20 09:19:02.965016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.965024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:24.112 [2024-11-20 09:19:02.965034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.779 ms 00:25:24.112 [2024-11-20 09:19:02.965041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.965148] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 354db496-d469-4609-a16c-aeb5ba375d57 00:25:24.112 [2024-11-20 09:19:02.966247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.966283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:24.112 [2024-11-20 09:19:02.966293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:24.112 [2024-11-20 09:19:02.966302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.971522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.971566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:24.112 [2024-11-20 09:19:02.971585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.176 ms 00:25:24.112 [2024-11-20 09:19:02.971600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.971647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.971658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:24.112 [2024-11-20 09:19:02.971667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:24.112 [2024-11-20 09:19:02.971677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.971744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.971762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:24.112 [2024-11-20 09:19:02.971787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:25:24.112 [2024-11-20 09:19:02.971802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.112 [2024-11-20 09:19:02.971824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:24.112 [2024-11-20 09:19:02.975460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.112 [2024-11-20 09:19:02.975493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:24.113 [2024-11-20 09:19:02.975506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.639 ms 00:25:24.113 [2024-11-20 09:19:02.975514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.113 [2024-11-20 09:19:02.975540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.113 [2024-11-20 09:19:02.975548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:24.113 [2024-11-20 09:19:02.975557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:24.113 [2024-11-20 09:19:02.975565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:24.113 [2024-11-20 09:19:02.975582] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:24.113 [2024-11-20 09:19:02.975714] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:25:24.113 [2024-11-20 09:19:02.975728] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:24.113 [2024-11-20 09:19:02.975739] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:25:24.113 [2024-11-20 09:19:02.975751] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:24.113 [2024-11-20 09:19:02.975759] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:24.113 [2024-11-20 09:19:02.975783] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:24.113 [2024-11-20 09:19:02.975792] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:24.113 [2024-11-20 09:19:02.975802] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:25:24.113 [2024-11-20 09:19:02.975809] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:25:24.113 [2024-11-20 09:19:02.975818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.113 [2024-11-20 09:19:02.975825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:24.113 [2024-11-20 09:19:02.975834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:25:24.113 [2024-11-20 09:19:02.975841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.113 [2024-11-20 09:19:02.975935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.113 [2024-11-20 09:19:02.975943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:24.113 [2024-11-20 09:19:02.975954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.077 ms 00:25:24.113 [2024-11-20 09:19:02.975967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.113 [2024-11-20 09:19:02.976083] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:24.113 [2024-11-20 09:19:02.976093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:24.113 [2024-11-20 09:19:02.976107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:24.113 [2024-11-20 09:19:02.976144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:24.113 [2024-11-20 09:19:02.976159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:24.113 [2024-11-20 09:19:02.976167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:24.113 [2024-11-20 09:19:02.976174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:24.113 [2024-11-20 09:19:02.976188] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:25:24.113 [2024-11-20 09:19:02.976196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:24.113 [2024-11-20 09:19:02.976210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:25:24.113 [2024-11-20 09:19:02.976217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:24.113 [2024-11-20 09:19:02.976233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:25:24.113 [2024-11-20 09:19:02.976243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:24.113 [2024-11-20 09:19:02.976259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:24.113 [2024-11-20 09:19:02.976266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:24.113 [2024-11-20 09:19:02.976280] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:24.113 [2024-11-20 09:19:02.976289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:24.113 [2024-11-20 09:19:02.976302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:24.113 [2024-11-20 09:19:02.976309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:24.113 [2024-11-20 09:19:02.976323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:25:24.113 [2024-11-20 09:19:02.976331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:24.113 [2024-11-20 09:19:02.976347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:25:24.113 [2024-11-20 09:19:02.976353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:24.113 [2024-11-20 09:19:02.976367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:25:24.113 [2024-11-20 09:19:02.976390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:25:24.113 [2024-11-20 09:19:02.976410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:25:24.113 [2024-11-20 09:19:02.976418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976424] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:25:24.113 [2024-11-20 09:19:02.976433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:24.113 [2024-11-20 09:19:02.976439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:24.113 [2024-11-20 09:19:02.976456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:24.113 [2024-11-20 09:19:02.976466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:24.113 [2024-11-20 09:19:02.976472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:24.113 [2024-11-20 09:19:02.976480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:24.113 [2024-11-20 09:19:02.976487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:24.113 [2024-11-20 09:19:02.976495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:24.113 [2024-11-20 09:19:02.976504] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:24.113 [2024-11-20 09:19:02.976515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:24.114 [2024-11-20 09:19:02.976534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:25:24.114 [2024-11-20 09:19:02.976557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:25:24.114 [2024-11-20 09:19:02.976566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:25:24.114 [2024-11-20 09:19:02.976572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:25:24.114 [2024-11-20 09:19:02.976581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:25:24.114 [2024-11-20 09:19:02.976637] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:24.114 [2024-11-20 09:19:02.976646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:24.114 [2024-11-20 09:19:02.976662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:24.114 [2024-11-20 09:19:02.976669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:24.114 [2024-11-20 09:19:02.976677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:24.114 [2024-11-20 09:19:02.976684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:24.114 [2024-11-20 09:19:02.976693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:24.114 [2024-11-20 09:19:02.976700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.671 ms 00:25:24.114 [2024-11-20 09:19:02.976708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:24.114 [2024-11-20 09:19:02.976750] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
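[Editor's note on the trace above: the layout dump is emitted while the FTL bdev is being brought up for the first time (FTL layout setup mode 1), which is also why the NV cache region needs a full scrub before use. A device of this shape is normally created through SPDK's RPC interface; a minimal sketch, assuming a base bdev and an NV cache bdev already exist — the bdev names below are placeholders, not the ones this harness uses, and the exact flags should be checked against scripts/rpc.py bdev_ftl_create -h:

    # Create an FTL bdev "ftl" on top of a ~20 GiB base device and a ~5 GiB NV cache device.
    # -b: FTL bdev name, -d: base bdev, -c: NV cache bdev.
    scripts/rpc.py bdev_ftl_create -b ftl -d base0 -c nvc0
]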
00:25:24.114 [2024-11-20 09:19:02.976762] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:25:26.660 [2024-11-20 09:19:05.102334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.102391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:26.660 [2024-11-20 09:19:05.102404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2125.574 ms 00:25:26.660 [2024-11-20 09:19:05.102413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.123669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.123715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:26.660 [2024-11-20 09:19:05.123725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.030 ms 00:25:26.660 [2024-11-20 09:19:05.123733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.123827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.123839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:26.660 [2024-11-20 09:19:05.123847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:26.660 [2024-11-20 09:19:05.123857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.148683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.148725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:26.660 [2024-11-20 09:19:05.148735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.781 ms 00:25:26.660 [2024-11-20 09:19:05.148743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.148775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.148787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:26.660 [2024-11-20 09:19:05.148794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:26.660 [2024-11-20 09:19:05.148802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.149150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.149167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:26.660 [2024-11-20 09:19:05.149174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.289 ms 00:25:26.660 [2024-11-20 09:19:05.149182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.149223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.149231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:26.660 [2024-11-20 09:19:05.149240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:25:26.660 [2024-11-20 09:19:05.149249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.161290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.161322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:26.660 [2024-11-20 09:19:05.161330] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.027 ms 00:25:26.660 [2024-11-20 09:19:05.161338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.170552] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:26.660 [2024-11-20 09:19:05.171331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.171356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:26.660 [2024-11-20 09:19:05.171365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.924 ms 00:25:26.660 [2024-11-20 09:19:05.171372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.202422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.202474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:26.660 [2024-11-20 09:19:05.202491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.017 ms 00:25:26.660 [2024-11-20 09:19:05.202500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.202575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.202588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:26.660 [2024-11-20 09:19:05.202600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:25:26.660 [2024-11-20 09:19:05.202608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.225644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.225690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:26.660 [2024-11-20 09:19:05.225708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.981 ms 00:25:26.660 [2024-11-20 09:19:05.225716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.248259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.248299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:26.660 [2024-11-20 09:19:05.248312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.511 ms 00:25:26.660 [2024-11-20 09:19:05.248320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.660 [2024-11-20 09:19:05.248868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.660 [2024-11-20 09:19:05.248899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:26.660 [2024-11-20 09:19:05.248909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.524 ms 00:25:26.661 [2024-11-20 09:19:05.248916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.318715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.661 [2024-11-20 09:19:05.318783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:26.661 [2024-11-20 09:19:05.318808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 69.755 ms 00:25:26.661 [2024-11-20 09:19:05.318820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.343989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:26.661 [2024-11-20 09:19:05.344037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:25:26.661 [2024-11-20 09:19:05.344064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.048 ms 00:25:26.661 [2024-11-20 09:19:05.344073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.368074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.661 [2024-11-20 09:19:05.368116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:25:26.661 [2024-11-20 09:19:05.368130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.969 ms 00:25:26.661 [2024-11-20 09:19:05.368137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.391355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.661 [2024-11-20 09:19:05.391414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:26.661 [2024-11-20 09:19:05.391428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.188 ms 00:25:26.661 [2024-11-20 09:19:05.391436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.391469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.661 [2024-11-20 09:19:05.391478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:26.661 [2024-11-20 09:19:05.391491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:26.661 [2024-11-20 09:19:05.391499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.391584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:26.661 [2024-11-20 09:19:05.391594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:26.661 [2024-11-20 09:19:05.391607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:25:26.661 [2024-11-20 09:19:05.391614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:26.661 [2024-11-20 09:19:05.392506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2428.045 ms, result 0 00:25:26.661 { 00:25:26.661 "name": "ftl", 00:25:26.661 "uuid": "354db496-d469-4609-a16c-aeb5ba375d57" 00:25:26.661 } 00:25:26.661 09:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:26.919 [2024-11-20 09:19:05.599922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.919 09:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:26.919 09:19:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:27.176 [2024-11-20 09:19:06.008320] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:25:27.176 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:27.434 [2024-11-20 09:19:06.232730] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:27.434 09:19:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:27.692 Fill FTL, iteration 1 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79063 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79063 /var/tmp/spdk.tgt.sock 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79063 ']' 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.692 09:19:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:27.950 [2024-11-20 09:19:06.662358] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
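[Editor's note: tcp_dd's tcp_initiator_setup launches a second spdk_tgt on its own RPC socket (/var/tmp/spdk.tgt.sock) to act as the NVMe/TCP initiator; once it is listening, the FTL namespace exported at 127.0.0.1:4420 is attached as a local bdev. The attach call, exactly as it appears in the trace just below, is:

    # Attach the NVMe-oF target as controller "ftl" on the initiator side;
    # namespace 1 surfaces as bdev "ftln1" (controller name plus namespace index).
    scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
        bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
]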
00:25:27.950 [2024-11-20 09:19:06.662477] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79063 ] 00:25:27.950 [2024-11-20 09:19:06.819789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.208 [2024-11-20 09:19:06.919384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:28.773 09:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.773 09:19:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:28.774 09:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:29.031 ftln1 00:25:29.031 09:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:29.031 09:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:29.289 09:19:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79063 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79063 ']' 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 79063 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79063 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.289 killing process with pid 79063 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79063' 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 79063 00:25:29.289 09:19:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 79063 00:25:30.693 09:19:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:30.693 09:19:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:30.950 [2024-11-20 09:19:09.641974] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:30.950 [2024-11-20 09:19:09.642566] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79109 ] 00:25:30.950 [2024-11-20 09:19:09.812411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.207 [2024-11-20 09:19:09.921346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.581  [2024-11-20T09:19:12.434Z] Copying: 187/1024 [MB] (187 MBps) [2024-11-20T09:19:13.371Z] Copying: 378/1024 [MB] (191 MBps) [2024-11-20T09:19:14.305Z] Copying: 577/1024 [MB] (199 MBps) [2024-11-20T09:19:15.692Z] Copying: 784/1024 [MB] (207 MBps) [2024-11-20T09:19:15.692Z] Copying: 980/1024 [MB] (196 MBps) [2024-11-20T09:19:16.632Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:25:37.713 00:25:37.713 Calculate MD5 checksum, iteration 1 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:37.713 09:19:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:37.713 [2024-11-20 09:19:16.411554] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:37.713 [2024-11-20 09:19:16.412434] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79178 ] 00:25:37.713 [2024-11-20 09:19:16.578655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.999 [2024-11-20 09:19:16.691588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.372  [2024-11-20T09:19:18.548Z] Copying: 674/1024 [MB] (674 MBps) [2024-11-20T09:19:19.158Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:25:40.239 00:25:40.239 09:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:40.239 09:19:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:42.762 Fill FTL, iteration 2 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cf18aaecb36e7b033b4338c93cadb772 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:42.762 09:19:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:42.762 [2024-11-20 09:19:21.367734] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
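[Editor's note: the trace above completes one full fill-and-checksum pass — fill 1 GiB at seek=0, read it back at skip=0, record sums[0] — then advances both offsets by 1024 MiB and kicks off iteration 2. Condensed, the loop upgrade_shutdown.sh is effectively running (tcp_dd is the harness wrapper around spdk_dd over the NVMe/TCP initiator; $testdir/file is shorthand for the test/ftl/file path seen in the trace):

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0
    sums=()
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        # Record the checksum so it can be compared after the shutdown/upgrade cycle.
        sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
    done
]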
00:25:42.762 [2024-11-20 09:19:21.367915] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79234 ] 00:25:42.762 [2024-11-20 09:19:21.529984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.762 [2024-11-20 09:19:21.632984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.135  [2024-11-20T09:19:24.433Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-20T09:19:25.005Z] Copying: 432/1024 [MB] (207 MBps) [2024-11-20T09:19:26.390Z] Copying: 566/1024 [MB] (134 MBps) [2024-11-20T09:19:27.334Z] Copying: 719/1024 [MB] (153 MBps) [2024-11-20T09:19:27.595Z] Copying: 922/1024 [MB] (203 MBps) [2024-11-20T09:19:28.538Z] Copying: 1024/1024 [MB] (average 183 MBps) 00:25:49.619 00:25:49.619 Calculate MD5 checksum, iteration 2 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:49.619 09:19:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:49.619 [2024-11-20 09:19:28.378211] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:25:49.619 [2024-11-20 09:19:28.378836] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79309 ] 00:25:49.881 [2024-11-20 09:19:28.539534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.881 [2024-11-20 09:19:28.641319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.303  [2024-11-20T09:19:31.165Z] Copying: 600/1024 [MB] (600 MBps) [2024-11-20T09:19:32.098Z] Copying: 1024/1024 [MB] (average 598 MBps) 00:25:53.179 00:25:53.179 09:19:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:25:53.179 09:19:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:55.725 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:55.725 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=65554b2f37b0521f3c616b140f779c20 00:25:55.725 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:55.725 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:55.725 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:55.725 [2024-11-20 09:19:34.368588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.725 [2024-11-20 09:19:34.368649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:55.725 [2024-11-20 09:19:34.368664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:25:55.725 [2024-11-20 09:19:34.368672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.725 [2024-11-20 09:19:34.368696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.725 [2024-11-20 09:19:34.368705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:55.725 [2024-11-20 09:19:34.368713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:55.725 [2024-11-20 09:19:34.368723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.725 [2024-11-20 09:19:34.368743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.725 [2024-11-20 09:19:34.368751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:55.725 [2024-11-20 09:19:34.368759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:55.725 [2024-11-20 09:19:34.368767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.725 [2024-11-20 09:19:34.368824] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.227 ms, result 0 00:25:55.725 true 00:25:55.725 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:55.725 { 00:25:55.725 "name": "ftl", 00:25:55.725 "properties": [ 00:25:55.725 { 00:25:55.725 "name": "superblock_version", 00:25:55.725 "value": 5, 00:25:55.725 "read-only": true 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "name": "base_device", 00:25:55.725 "bands": [ 00:25:55.725 { 00:25:55.725 "id": 0, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 
00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 1, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 2, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 3, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 4, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 5, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 6, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 7, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 8, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 9, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 10, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 11, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 12, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 13, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 14, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 15, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 16, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.725 }, 00:25:55.725 { 00:25:55.725 "id": 17, 00:25:55.725 "state": "FREE", 00:25:55.725 "validity": 0.0 00:25:55.726 } 00:25:55.726 ], 00:25:55.726 "read-only": true 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "name": "cache_device", 00:25:55.726 "type": "bdev", 00:25:55.726 "chunks": [ 00:25:55.726 { 00:25:55.726 "id": 0, 00:25:55.726 "state": "INACTIVE", 00:25:55.726 "utilization": 0.0 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "id": 1, 00:25:55.726 "state": "CLOSED", 00:25:55.726 "utilization": 1.0 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "id": 2, 00:25:55.726 "state": "CLOSED", 00:25:55.726 "utilization": 1.0 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "id": 3, 00:25:55.726 "state": "OPEN", 00:25:55.726 "utilization": 0.001953125 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "id": 4, 00:25:55.726 "state": "OPEN", 00:25:55.726 "utilization": 0.0 00:25:55.726 } 00:25:55.726 ], 00:25:55.726 "read-only": true 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "name": "verbose_mode", 00:25:55.726 "value": true, 00:25:55.726 "unit": "", 00:25:55.726 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:55.726 }, 00:25:55.726 { 00:25:55.726 "name": "prep_upgrade_on_shutdown", 00:25:55.726 "value": false, 00:25:55.726 "unit": "", 00:25:55.726 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:55.726 } 00:25:55.726 ] 00:25:55.726 } 00:25:55.726 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:25:55.984 [2024-11-20 09:19:34.797096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:25:55.984 [2024-11-20 09:19:34.797146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:55.984 [2024-11-20 09:19:34.797158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:55.984 [2024-11-20 09:19:34.797166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.984 [2024-11-20 09:19:34.797186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.984 [2024-11-20 09:19:34.797194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:55.984 [2024-11-20 09:19:34.797202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:55.984 [2024-11-20 09:19:34.797209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.984 [2024-11-20 09:19:34.797228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:55.984 [2024-11-20 09:19:34.797236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:55.984 [2024-11-20 09:19:34.797243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:55.984 [2024-11-20 09:19:34.797250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:55.984 [2024-11-20 09:19:34.797305] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.198 ms, result 0 00:25:55.984 true 00:25:55.984 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:25:55.984 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:25:55.984 09:19:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.241 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:25:56.241 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:25:56.241 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:25:56.499 [2024-11-20 09:19:35.217530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.499 [2024-11-20 09:19:35.217588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:25:56.499 [2024-11-20 09:19:35.217601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:25:56.499 [2024-11-20 09:19:35.217609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.499 [2024-11-20 09:19:35.217631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.499 [2024-11-20 09:19:35.217640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:25:56.499 [2024-11-20 09:19:35.217647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:25:56.499 [2024-11-20 09:19:35.217654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:56.499 [2024-11-20 09:19:35.217672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:56.499 [2024-11-20 09:19:35.217680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:25:56.499 [2024-11-20 09:19:35.217687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:25:56.499 [2024-11-20 09:19:35.217694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:25:56.499 [2024-11-20 09:19:35.217749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.209 ms, result 0 00:25:56.499 true 00:25:56.499 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:25:56.757 { 00:25:56.757 "name": "ftl", 00:25:56.757 "properties": [ 00:25:56.757 { 00:25:56.757 "name": "superblock_version", 00:25:56.757 "value": 5, 00:25:56.757 "read-only": true 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "name": "base_device", 00:25:56.757 "bands": [ 00:25:56.757 { 00:25:56.757 "id": 0, 00:25:56.757 "state": "FREE", 00:25:56.757 "validity": 0.0 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "id": 1, 00:25:56.757 "state": "FREE", 00:25:56.757 "validity": 0.0 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "id": 2, 00:25:56.757 "state": "FREE", 00:25:56.757 "validity": 0.0 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "id": 3, 00:25:56.757 "state": "FREE", 00:25:56.757 "validity": 0.0 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "id": 4, 00:25:56.757 "state": "FREE", 00:25:56.757 "validity": 0.0 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "id": 5, 00:25:56.757 "state": "FREE", 00:25:56.757 "validity": 0.0 00:25:56.757 }, 00:25:56.757 { 00:25:56.757 "id": 6, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 7, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 8, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 9, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 10, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 11, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 12, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 13, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 14, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 15, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 16, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 17, 00:25:56.758 "state": "FREE", 00:25:56.758 "validity": 0.0 00:25:56.758 } 00:25:56.758 ], 00:25:56.758 "read-only": true 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "name": "cache_device", 00:25:56.758 "type": "bdev", 00:25:56.758 "chunks": [ 00:25:56.758 { 00:25:56.758 "id": 0, 00:25:56.758 "state": "INACTIVE", 00:25:56.758 "utilization": 0.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 1, 00:25:56.758 "state": "CLOSED", 00:25:56.758 "utilization": 1.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 2, 00:25:56.758 "state": "CLOSED", 00:25:56.758 "utilization": 1.0 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 3, 00:25:56.758 "state": "OPEN", 00:25:56.758 "utilization": 0.001953125 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "id": 4, 00:25:56.758 "state": "OPEN", 00:25:56.758 "utilization": 0.0 00:25:56.758 } 00:25:56.758 ], 00:25:56.758 "read-only": true 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "name": "verbose_mode", 
00:25:56.758 "value": true, 00:25:56.758 "unit": "", 00:25:56.758 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:25:56.758 }, 00:25:56.758 { 00:25:56.758 "name": "prep_upgrade_on_shutdown", 00:25:56.758 "value": true, 00:25:56.758 "unit": "", 00:25:56.758 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:25:56.758 } 00:25:56.758 ] 00:25:56.758 } 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78955 ]] 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78955 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78955 ']' 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 78955 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78955 00:25:56.758 killing process with pid 78955 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78955' 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 78955 00:25:56.758 09:19:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 78955 00:25:57.323 [2024-11-20 09:19:36.158356] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:57.323 [2024-11-20 09:19:36.173245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.323 [2024-11-20 09:19:36.173290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:57.323 [2024-11-20 09:19:36.173302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:25:57.323 [2024-11-20 09:19:36.173310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:57.323 [2024-11-20 09:19:36.173332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:57.323 [2024-11-20 09:19:36.175959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:57.323 [2024-11-20 09:19:36.176001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:57.323 [2024-11-20 09:19:36.176012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.613 ms 00:25:57.323 [2024-11-20 09:19:36.176021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.320 [2024-11-20 09:19:45.468593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.320 [2024-11-20 09:19:45.468658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:07.320 [2024-11-20 09:19:45.468672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9292.508 ms 00:26:07.321 [2024-11-20 09:19:45.468685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.469954] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.469971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:07.321 [2024-11-20 09:19:45.469980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.253 ms 00:26:07.321 [2024-11-20 09:19:45.469987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.471099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.471117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:07.321 [2024-11-20 09:19:45.471126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.089 ms 00:26:07.321 [2024-11-20 09:19:45.471134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.480565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.480609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:07.321 [2024-11-20 09:19:45.480620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.388 ms 00:26:07.321 [2024-11-20 09:19:45.480627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.487131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.487173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:07.321 [2024-11-20 09:19:45.487184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.471 ms 00:26:07.321 [2024-11-20 09:19:45.487192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.487274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.487285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:07.321 [2024-11-20 09:19:45.487298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:26:07.321 [2024-11-20 09:19:45.487305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.496537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.496578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:07.321 [2024-11-20 09:19:45.496589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.215 ms 00:26:07.321 [2024-11-20 09:19:45.496596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.506096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.506131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:07.321 [2024-11-20 09:19:45.506141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.465 ms 00:26:07.321 [2024-11-20 09:19:45.506148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.516059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.516092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:07.321 [2024-11-20 09:19:45.516103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.877 ms 00:26:07.321 [2024-11-20 09:19:45.516111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.525898] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.525939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:07.321 [2024-11-20 09:19:45.525949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.718 ms 00:26:07.321 [2024-11-20 09:19:45.525955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.525989] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:07.321 [2024-11-20 09:19:45.526003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:07.321 [2024-11-20 09:19:45.526014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:07.321 [2024-11-20 09:19:45.526035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:07.321 [2024-11-20 09:19:45.526044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:07.321 [2024-11-20 09:19:45.526160] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:07.321 [2024-11-20 09:19:45.526167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 354db496-d469-4609-a16c-aeb5ba375d57 00:26:07.321 [2024-11-20 09:19:45.526175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:07.321 [2024-11-20 09:19:45.526182] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:26:07.321 [2024-11-20 09:19:45.526189] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:07.321 [2024-11-20 09:19:45.526197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:07.321 [2024-11-20 09:19:45.526203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:07.321 [2024-11-20 09:19:45.526215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:07.321 [2024-11-20 09:19:45.526222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:07.321 [2024-11-20 09:19:45.526230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:07.321 [2024-11-20 09:19:45.526236] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:07.321 [2024-11-20 09:19:45.526243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.526253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:07.321 [2024-11-20 09:19:45.526261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.255 ms 00:26:07.321 [2024-11-20 09:19:45.526268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.539043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.539083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:07.321 [2024-11-20 09:19:45.539094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.757 ms 00:26:07.321 [2024-11-20 09:19:45.539107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.539454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:07.321 [2024-11-20 09:19:45.539462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:07.321 [2024-11-20 09:19:45.539470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.318 ms 00:26:07.321 [2024-11-20 09:19:45.539477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.321 [2024-11-20 09:19:45.582964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.321 [2024-11-20 09:19:45.583021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:07.321 [2024-11-20 09:19:45.583039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.583047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.583092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.583100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:07.322 [2024-11-20 09:19:45.583108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.583115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.583200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.583211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:07.322 [2024-11-20 09:19:45.583219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.583226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.583245] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.583254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:07.322 [2024-11-20 09:19:45.583262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.583269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.662159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.662205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:07.322 [2024-11-20 09:19:45.662216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.662229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.726915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.726956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:07.322 [2024-11-20 09:19:45.726968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.726975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.727067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.727076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:07.322 [2024-11-20 09:19:45.727085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.727093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.727136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.727146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:07.322 [2024-11-20 09:19:45.727154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.727161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.727245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.727254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:07.322 [2024-11-20 09:19:45.727262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.727269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.727297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.727309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:07.322 [2024-11-20 09:19:45.727317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.727325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.727358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.727367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:07.322 [2024-11-20 09:19:45.727375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.727382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 
[2024-11-20 09:19:45.727424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:07.322 [2024-11-20 09:19:45.727434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:07.322 [2024-11-20 09:19:45.727442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:07.322 [2024-11-20 09:19:45.727449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:07.322 [2024-11-20 09:19:45.727559] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9554.259 ms, result 0 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79520 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79520 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79520 ']' 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:11.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:11.515 09:19:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:11.515 [2024-11-20 09:19:50.275259] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
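A quick consistency note on the statistics dumped during the 'FTL shutdown' sequence above: WAF (write amplification factor) is simply total device writes divided by user writes, and a band's validity is its valid-block count over the band size. Checking the reported figures with plain awk (values copied verbatim from the dump; this check is not part of the test itself):

    $ awk 'BEGIN { printf "WAF      = %.4f\n", 786752 / 524288 }'
    WAF      = 1.5006
    $ awk 'BEGIN { printf "validity = %.10f\n", 2048 / 261120 }'
    validity = 0.0078431373

The second value is the 2048 / 261120 band from the bands-validity dump, and it matches (up to float rounding) the 0.007843137254901933 validity that bdev_ftl_get_properties reports for the same band further down.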
00:26:11.515 [2024-11-20 09:19:50.275622] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79520 ] 00:26:11.772 [2024-11-20 09:19:50.441162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.772 [2024-11-20 09:19:50.547931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.337 [2024-11-20 09:19:51.229676] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:12.337 [2024-11-20 09:19:51.229743] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:12.596 [2024-11-20 09:19:51.374175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.374233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:12.596 [2024-11-20 09:19:51.374246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:12.596 [2024-11-20 09:19:51.374254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.374307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.374317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:12.596 [2024-11-20 09:19:51.374325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:12.596 [2024-11-20 09:19:51.374332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.374357] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:12.596 [2024-11-20 09:19:51.375088] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:12.596 [2024-11-20 09:19:51.375110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.375118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:12.596 [2024-11-20 09:19:51.375126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.761 ms 00:26:12.596 [2024-11-20 09:19:51.375134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.376937] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:12.596 [2024-11-20 09:19:51.389105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.389146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:12.596 [2024-11-20 09:19:51.389164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.169 ms 00:26:12.596 [2024-11-20 09:19:51.389173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.389240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.389250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:12.596 [2024-11-20 09:19:51.389258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:26:12.596 [2024-11-20 09:19:51.389266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.394039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 
09:19:51.394073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:12.596 [2024-11-20 09:19:51.394082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.708 ms 00:26:12.596 [2024-11-20 09:19:51.394089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.394147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.394156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:12.596 [2024-11-20 09:19:51.394164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:12.596 [2024-11-20 09:19:51.394171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.394214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.394223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:12.596 [2024-11-20 09:19:51.394234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:12.596 [2024-11-20 09:19:51.394241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.394263] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:12.596 [2024-11-20 09:19:51.397642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.397670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:12.596 [2024-11-20 09:19:51.397679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.386 ms 00:26:12.596 [2024-11-20 09:19:51.397688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.397716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.397724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:12.596 [2024-11-20 09:19:51.397732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:12.596 [2024-11-20 09:19:51.397739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.397759] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:12.596 [2024-11-20 09:19:51.397776] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:12.596 [2024-11-20 09:19:51.397812] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:12.596 [2024-11-20 09:19:51.397827] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:12.596 [2024-11-20 09:19:51.397940] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:12.596 [2024-11-20 09:19:51.397956] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:12.596 [2024-11-20 09:19:51.397967] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:12.596 [2024-11-20 09:19:51.397977] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:12.596 [2024-11-20 09:19:51.397985] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:26:12.596 [2024-11-20 09:19:51.397995] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:12.596 [2024-11-20 09:19:51.398003] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:12.596 [2024-11-20 09:19:51.398010] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:12.596 [2024-11-20 09:19:51.398017] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:12.596 [2024-11-20 09:19:51.398024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.398030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:12.596 [2024-11-20 09:19:51.398038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:26:12.596 [2024-11-20 09:19:51.398044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.596 [2024-11-20 09:19:51.398129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.596 [2024-11-20 09:19:51.398142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:12.596 [2024-11-20 09:19:51.398150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:12.597 [2024-11-20 09:19:51.398159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.597 [2024-11-20 09:19:51.398273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:12.597 [2024-11-20 09:19:51.398288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:12.597 [2024-11-20 09:19:51.398297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:12.597 [2024-11-20 09:19:51.398319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:12.597 [2024-11-20 09:19:51.398333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:12.597 [2024-11-20 09:19:51.398340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:12.597 [2024-11-20 09:19:51.398346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:12.597 [2024-11-20 09:19:51.398359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:12.597 [2024-11-20 09:19:51.398366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:12.597 [2024-11-20 09:19:51.398379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:12.597 [2024-11-20 09:19:51.398386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:12.597 [2024-11-20 09:19:51.398399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:12.597 [2024-11-20 09:19:51.398406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398412] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:12.597 [2024-11-20 09:19:51.398419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:12.597 [2024-11-20 09:19:51.398426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:12.597 [2024-11-20 09:19:51.398438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:12.597 [2024-11-20 09:19:51.398444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:12.597 [2024-11-20 09:19:51.398464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:12.597 [2024-11-20 09:19:51.398470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:12.597 [2024-11-20 09:19:51.398483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:12.597 [2024-11-20 09:19:51.398490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:12.597 [2024-11-20 09:19:51.398502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:12.597 [2024-11-20 09:19:51.398509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:12.597 [2024-11-20 09:19:51.398522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:12.597 [2024-11-20 09:19:51.398540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:12.597 [2024-11-20 09:19:51.398559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:12.597 [2024-11-20 09:19:51.398565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398571] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:12.597 [2024-11-20 09:19:51.398579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:12.597 [2024-11-20 09:19:51.398586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:12.597 [2024-11-20 09:19:51.398610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:12.597 [2024-11-20 09:19:51.398618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:12.597 [2024-11-20 09:19:51.398624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:12.597 [2024-11-20 09:19:51.398631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:12.597 [2024-11-20 09:19:51.398637] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:12.597 [2024-11-20 09:19:51.398644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:12.597 [2024-11-20 09:19:51.398652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:12.597 [2024-11-20 09:19:51.398662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:12.597 [2024-11-20 09:19:51.398677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398692] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:12.597 [2024-11-20 09:19:51.398699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:12.597 [2024-11-20 09:19:51.398706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:12.597 [2024-11-20 09:19:51.398713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:12.597 [2024-11-20 09:19:51.398720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:12.597 [2024-11-20 09:19:51.398769] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:12.597 [2024-11-20 09:19:51.398776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:12.597 [2024-11-20 09:19:51.398791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:12.597 [2024-11-20 09:19:51.398798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:12.597 [2024-11-20 09:19:51.398806] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:12.597 [2024-11-20 09:19:51.398813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:12.597 [2024-11-20 09:19:51.398820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:12.597 [2024-11-20 09:19:51.398828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.608 ms 00:26:12.597 [2024-11-20 09:19:51.398834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:12.597 [2024-11-20 09:19:51.398886] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:26:12.597 [2024-11-20 09:19:51.398900] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:15.891 [2024-11-20 09:19:54.440358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.440441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:15.892 [2024-11-20 09:19:54.440457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3041.458 ms 00:26:15.892 [2024-11-20 09:19:54.440466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.468201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.468259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:15.892 [2024-11-20 09:19:54.468273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.512 ms 00:26:15.892 [2024-11-20 09:19:54.468283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.468397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.468415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:15.892 [2024-11-20 09:19:54.468426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:15.892 [2024-11-20 09:19:54.468435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.500926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.500977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:15.892 [2024-11-20 09:19:54.500992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.433 ms 00:26:15.892 [2024-11-20 09:19:54.501004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.501055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.501064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:15.892 [2024-11-20 09:19:54.501073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:15.892 [2024-11-20 09:19:54.501080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.501520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.501550] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:15.892 [2024-11-20 09:19:54.501560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.378 ms 00:26:15.892 [2024-11-20 09:19:54.501568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.501618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.501627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:15.892 [2024-11-20 09:19:54.501636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:15.892 [2024-11-20 09:19:54.501644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.517060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.517098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:15.892 [2024-11-20 09:19:54.517110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.391 ms 00:26:15.892 [2024-11-20 09:19:54.517118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.530409] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:15.892 [2024-11-20 09:19:54.530456] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:15.892 [2024-11-20 09:19:54.530470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.530478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:15.892 [2024-11-20 09:19:54.530487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.234 ms 00:26:15.892 [2024-11-20 09:19:54.530494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.544619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.544659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:15.892 [2024-11-20 09:19:54.544671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.078 ms 00:26:15.892 [2024-11-20 09:19:54.544680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.556465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.556506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:15.892 [2024-11-20 09:19:54.556517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.737 ms 00:26:15.892 [2024-11-20 09:19:54.556526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.568895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.568936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:15.892 [2024-11-20 09:19:54.568947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.324 ms 00:26:15.892 [2024-11-20 09:19:54.568956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.569615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.569648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:15.892 [2024-11-20 
09:19:54.569658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.536 ms 00:26:15.892 [2024-11-20 09:19:54.569666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.639385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.639466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:15.892 [2024-11-20 09:19:54.639481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 69.693 ms 00:26:15.892 [2024-11-20 09:19:54.639490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.650715] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:15.892 [2024-11-20 09:19:54.651705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.651759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:15.892 [2024-11-20 09:19:54.651771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.150 ms 00:26:15.892 [2024-11-20 09:19:54.651780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.651904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.651921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:15.892 [2024-11-20 09:19:54.651930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:15.892 [2024-11-20 09:19:54.651939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.651996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.652007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:15.892 [2024-11-20 09:19:54.652016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:26:15.892 [2024-11-20 09:19:54.652024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.652049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.652058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:15.892 [2024-11-20 09:19:54.652066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:15.892 [2024-11-20 09:19:54.652077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.652108] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:15.892 [2024-11-20 09:19:54.652118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.652125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:15.892 [2024-11-20 09:19:54.652133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:15.892 [2024-11-20 09:19:54.652141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.677195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.677250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:15.892 [2024-11-20 09:19:54.677263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.031 ms 00:26:15.892 [2024-11-20 09:19:54.677272] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.677362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:15.892 [2024-11-20 09:19:54.677373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:15.892 [2024-11-20 09:19:54.677382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:15.892 [2024-11-20 09:19:54.677389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:15.892 [2024-11-20 09:19:54.678628] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3303.991 ms, result 0 00:26:15.892 [2024-11-20 09:19:54.693594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.892 [2024-11-20 09:19:54.709587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:15.892 [2024-11-20 09:19:54.717742] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:15.892 09:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.892 09:19:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:15.892 09:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:15.892 09:19:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:15.892 09:19:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:16.153 [2024-11-20 09:19:54.949837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.153 [2024-11-20 09:19:54.949903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:16.153 [2024-11-20 09:19:54.949918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:16.153 [2024-11-20 09:19:54.949931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.153 [2024-11-20 09:19:54.949957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.153 [2024-11-20 09:19:54.949967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:16.153 [2024-11-20 09:19:54.949975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:16.153 [2024-11-20 09:19:54.949984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.153 [2024-11-20 09:19:54.950004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.153 [2024-11-20 09:19:54.950012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:16.153 [2024-11-20 09:19:54.950021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:16.153 [2024-11-20 09:19:54.950028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.153 [2024-11-20 09:19:54.950091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.245 ms, result 0 00:26:16.153 true 00:26:16.153 09:19:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.414 { 00:26:16.414 "name": "ftl", 00:26:16.414 "properties": [ 00:26:16.414 { 00:26:16.414 "name": "superblock_version", 00:26:16.414 "value": 5, 00:26:16.414 "read-only": true 00:26:16.414 }, 
00:26:16.414 { 00:26:16.414 "name": "base_device", 00:26:16.414 "bands": [ 00:26:16.414 { 00:26:16.414 "id": 0, 00:26:16.414 "state": "CLOSED", 00:26:16.414 "validity": 1.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 1, 00:26:16.414 "state": "CLOSED", 00:26:16.414 "validity": 1.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 2, 00:26:16.414 "state": "CLOSED", 00:26:16.414 "validity": 0.007843137254901933 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 3, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 4, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 5, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 6, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 7, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 8, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 9, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 10, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 11, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 12, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 13, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 14, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 15, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 16, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 17, 00:26:16.414 "state": "FREE", 00:26:16.414 "validity": 0.0 00:26:16.414 } 00:26:16.414 ], 00:26:16.414 "read-only": true 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "name": "cache_device", 00:26:16.414 "type": "bdev", 00:26:16.414 "chunks": [ 00:26:16.414 { 00:26:16.414 "id": 0, 00:26:16.414 "state": "INACTIVE", 00:26:16.414 "utilization": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 1, 00:26:16.414 "state": "OPEN", 00:26:16.414 "utilization": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 2, 00:26:16.414 "state": "OPEN", 00:26:16.414 "utilization": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 3, 00:26:16.414 "state": "FREE", 00:26:16.414 "utilization": 0.0 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "id": 4, 00:26:16.414 "state": "FREE", 00:26:16.414 "utilization": 0.0 00:26:16.414 } 00:26:16.414 ], 00:26:16.414 "read-only": true 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "name": "verbose_mode", 00:26:16.414 "value": true, 00:26:16.414 "unit": "", 00:26:16.414 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:16.414 }, 00:26:16.414 { 00:26:16.414 "name": "prep_upgrade_on_shutdown", 00:26:16.414 "value": false, 00:26:16.414 "unit": "", 00:26:16.414 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:16.414 } 00:26:16.414 ] 00:26:16.414 } 00:26:16.414 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:16.414 09:19:55 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.414 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:16.675 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:16.675 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:16.675 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:16.675 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:16.675 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:16.932 Validate MD5 checksum, iteration 1 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:16.932 09:19:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:16.932 [2024-11-20 09:19:55.716769] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
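After the jq prechecks above confirm that no cache chunks are in use (used=0) and no bands are in the OPENED state (opened=0), the checksum pass reads the ftln1 bdev back over NVMe/TCP in fixed 1 GiB windows. A minimal sketch of the loop shape driving the spdk_dd invocations seen in the trace (the --cpumask/--rpc-socket/--json plumbing is elided, and iterations=2 matches this run):

    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read the next 1024 blocks of 1 MiB each from ftln1 into the scratch file
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftln1 \
            --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
    done

This matches the --skip progression visible in the log: 0 for iteration 1, then 1024, ending at 2048.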
00:26:16.932 [2024-11-20 09:19:55.717070] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79600 ] 00:26:17.189 [2024-11-20 09:19:55.877514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.189 [2024-11-20 09:19:55.976046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.567  [2024-11-20T09:19:58.054Z] Copying: 720/1024 [MB] (720 MBps) [2024-11-20T09:19:58.988Z] Copying: 1024/1024 [MB] (average 717 MBps) 00:26:20.069 00:26:20.069 09:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:20.069 09:19:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cf18aaecb36e7b033b4338c93cadb772 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cf18aaecb36e7b033b4338c93cadb772 != \c\f\1\8\a\a\e\c\b\3\6\e\7\b\0\3\3\b\4\3\3\8\c\9\3\c\a\d\b\7\7\2 ]] 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:22.594 Validate MD5 checksum, iteration 2 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:22.594 09:20:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:22.594 [2024-11-20 09:20:01.087989] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
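Each iteration ends the same way: md5sum over the scratch file, cut out the digest, and a bash string comparison against a previously recorded digest for that window (the escaped \c\f\1\8... form in the trace is just how xtrace renders the quoted, non-glob right-hand side of != inside [[ ]]). A standalone sketch of that check, with $expected standing in for wherever the reference digest comes from:

    # digest of the data just read back from ftln1
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    # $expected: the reference digest for this window (assumed recorded earlier)
    if [[ $sum != "$expected" ]]; then
        echo "checksum mismatch: got $sum, expected $expected" >&2
        exit 1
    fi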
00:26:22.594 [2024-11-20 09:20:01.088102] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79656 ] 00:26:22.594 [2024-11-20 09:20:01.244129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.594 [2024-11-20 09:20:01.343193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.067  [2024-11-20T09:20:03.550Z] Copying: 647/1024 [MB] (647 MBps) [2024-11-20T09:20:06.834Z] Copying: 1024/1024 [MB] (average 629 MBps) 00:26:27.915 00:26:27.915 09:20:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:27.915 09:20:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=65554b2f37b0521f3c616b140f779c20 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 65554b2f37b0521f3c616b140f779c20 != \6\5\5\5\4\b\2\f\3\7\b\0\5\2\1\f\3\c\6\1\6\b\1\4\0\f\7\7\9\c\2\0 ]] 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 79520 ]] 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 79520 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79734 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79734 00:26:29.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79734 ']' 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
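The kill-and-restart traced above is the heart of the upgrade_shutdown scenario: the first target (pid 79520) gets SIGKILL, so FTL never runs its clean shutdown path and its superblock stays dirty (it was explicitly put there by the 'Set FTL dirty state' step during the first startup), and a second target is then started from the same saved bdev config and must recover, as the 'SHM: clean 0, shm_clean 0' line below shows. A condensed sketch of this step, assuming waitforlisten from autotest_common.sh is sourced:

    # hard-kill the target: no clean FTL shutdown, superblock remains dirty
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid
    # start a fresh target from the saved config and wait for its RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"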
00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:29.291 09:20:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:29.291 [2024-11-20 09:20:07.981725] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:26:29.291 [2024-11-20 09:20:07.981854] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79734 ] 00:26:29.291 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 79520 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:29.291 [2024-11-20 09:20:08.140978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.553 [2024-11-20 09:20:08.245412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.123 [2024-11-20 09:20:08.982235] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:30.123 [2024-11-20 09:20:08.982317] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:30.383 [2024-11-20 09:20:09.135374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.135443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:30.383 [2024-11-20 09:20:09.135457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:30.383 [2024-11-20 09:20:09.135466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.135529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.135541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:30.383 [2024-11-20 09:20:09.135549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:30.383 [2024-11-20 09:20:09.135557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.135584] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:30.383 [2024-11-20 09:20:09.136471] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:30.383 [2024-11-20 09:20:09.136635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.136648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:30.383 [2024-11-20 09:20:09.136659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.059 ms 00:26:30.383 [2024-11-20 09:20:09.136667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.137028] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:30.383 [2024-11-20 09:20:09.153941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.153999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:30.383 [2024-11-20 09:20:09.154012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.911 ms 
00:26:30.383 [2024-11-20 09:20:09.154020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.163639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.163896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:30.383 [2024-11-20 09:20:09.163920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:26:30.383 [2024-11-20 09:20:09.163929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.164292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.164312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:30.383 [2024-11-20 09:20:09.164322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:26:30.383 [2024-11-20 09:20:09.164329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.164381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.164393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:30.383 [2024-11-20 09:20:09.164402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:26:30.383 [2024-11-20 09:20:09.164410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.164435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.164443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:30.383 [2024-11-20 09:20:09.164451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:30.383 [2024-11-20 09:20:09.164459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.164481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:30.383 [2024-11-20 09:20:09.167905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.167939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:30.383 [2024-11-20 09:20:09.167949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.429 ms 00:26:30.383 [2024-11-20 09:20:09.167957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.167996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.168004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:30.383 [2024-11-20 09:20:09.168012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:30.383 [2024-11-20 09:20:09.168020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.168056] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:30.383 [2024-11-20 09:20:09.168076] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:26:30.383 [2024-11-20 09:20:09.168111] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:30.383 [2024-11-20 09:20:09.168129] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:26:30.383 [2024-11-20 
09:20:09.168232] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:30.383 [2024-11-20 09:20:09.168244] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:30.383 [2024-11-20 09:20:09.168254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:30.383 [2024-11-20 09:20:09.168263] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:30.383 [2024-11-20 09:20:09.168273] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:30.383 [2024-11-20 09:20:09.168282] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:30.383 [2024-11-20 09:20:09.168289] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:30.383 [2024-11-20 09:20:09.168296] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:30.383 [2024-11-20 09:20:09.168302] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:30.383 [2024-11-20 09:20:09.168311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.168321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:30.383 [2024-11-20 09:20:09.168329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:26:30.383 [2024-11-20 09:20:09.168336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.168421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.383 [2024-11-20 09:20:09.168429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:30.383 [2024-11-20 09:20:09.168437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:26:30.383 [2024-11-20 09:20:09.168444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.383 [2024-11-20 09:20:09.168547] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:30.383 [2024-11-20 09:20:09.168556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:30.383 [2024-11-20 09:20:09.168567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:30.383 [2024-11-20 09:20:09.168574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.383 [2024-11-20 09:20:09.168582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:30.384 [2024-11-20 09:20:09.168589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:30.384 [2024-11-20 09:20:09.168603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:30.384 [2024-11-20 09:20:09.168611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:30.384 [2024-11-20 09:20:09.168617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:30.384 [2024-11-20 09:20:09.168631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:30.384 [2024-11-20 09:20:09.168637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 
09:20:09.168644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:30.384 [2024-11-20 09:20:09.168651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:30.384 [2024-11-20 09:20:09.168657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:30.384 [2024-11-20 09:20:09.168669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:30.384 [2024-11-20 09:20:09.168676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:30.384 [2024-11-20 09:20:09.168689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:30.384 [2024-11-20 09:20:09.168696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:30.384 [2024-11-20 09:20:09.168716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:30.384 [2024-11-20 09:20:09.168722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:30.384 [2024-11-20 09:20:09.168736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:30.384 [2024-11-20 09:20:09.168742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:30.384 [2024-11-20 09:20:09.168758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:30.384 [2024-11-20 09:20:09.168764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:30.384 [2024-11-20 09:20:09.168777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:30.384 [2024-11-20 09:20:09.168784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:30.384 [2024-11-20 09:20:09.168797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:30.384 [2024-11-20 09:20:09.168817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:30.384 [2024-11-20 09:20:09.168837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:30.384 [2024-11-20 09:20:09.168843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168849] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:30.384 [2024-11-20 09:20:09.168858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:30.384 
[2024-11-20 09:20:09.168864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:30.384 [2024-11-20 09:20:09.168900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:30.384 [2024-11-20 09:20:09.168906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:30.384 [2024-11-20 09:20:09.168913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:30.384 [2024-11-20 09:20:09.168921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:30.384 [2024-11-20 09:20:09.168927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:30.384 [2024-11-20 09:20:09.168935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:30.384 [2024-11-20 09:20:09.168943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:30.384 [2024-11-20 09:20:09.168958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.168966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:30.384 [2024-11-20 09:20:09.168973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.168981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.168987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:30.384 [2024-11-20 09:20:09.168994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:30.384 [2024-11-20 09:20:09.169002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:30.384 [2024-11-20 09:20:09.169009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:30.384 [2024-11-20 09:20:09.169016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:30.384 [2024-11-20 09:20:09.169066] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:30.384 [2024-11-20 09:20:09.169075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169082] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:30.384 [2024-11-20 09:20:09.169090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:30.384 [2024-11-20 09:20:09.169097] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:30.384 [2024-11-20 09:20:09.169104] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:30.384 [2024-11-20 09:20:09.169112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.384 [2024-11-20 09:20:09.169122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:30.384 [2024-11-20 09:20:09.169130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:26:30.384 [2024-11-20 09:20:09.169136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.384 [2024-11-20 09:20:09.194779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.384 [2024-11-20 09:20:09.194832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:30.384 [2024-11-20 09:20:09.194845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.589 ms 00:26:30.384 [2024-11-20 09:20:09.194853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.384 [2024-11-20 09:20:09.194923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.384 [2024-11-20 09:20:09.194950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:30.384 [2024-11-20 09:20:09.194959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:30.384 [2024-11-20 09:20:09.194966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.384 [2024-11-20 09:20:09.226480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.384 [2024-11-20 09:20:09.226665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:30.384 [2024-11-20 09:20:09.226683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.441 ms 00:26:30.385 [2024-11-20 09:20:09.226692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.226743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.226751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:30.385 [2024-11-20 09:20:09.226760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:30.385 [2024-11-20 09:20:09.226767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.226903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.226914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:26:30.385 [2024-11-20 09:20:09.226923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:26:30.385 [2024-11-20 09:20:09.226930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.226969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.226977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:30.385 [2024-11-20 09:20:09.226984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:30.385 [2024-11-20 09:20:09.226992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.241428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.241588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:30.385 [2024-11-20 09:20:09.241605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.414 ms 00:26:30.385 [2024-11-20 09:20:09.241613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.241757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.241769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:30.385 [2024-11-20 09:20:09.241777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:30.385 [2024-11-20 09:20:09.241784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.271700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.271791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:30.385 [2024-11-20 09:20:09.271809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.891 ms 00:26:30.385 [2024-11-20 09:20:09.271820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.385 [2024-11-20 09:20:09.281601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.385 [2024-11-20 09:20:09.281645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:30.385 [2024-11-20 09:20:09.281662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:26:30.385 [2024-11-20 09:20:09.281670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.649 [2024-11-20 09:20:09.337454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.649 [2024-11-20 09:20:09.337508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:30.649 [2024-11-20 09:20:09.337526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 55.703 ms 00:26:30.649 [2024-11-20 09:20:09.337535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.649 [2024-11-20 09:20:09.337680] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:30.649 [2024-11-20 09:20:09.337773] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:30.649 [2024-11-20 09:20:09.337863] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:30.649 [2024-11-20 09:20:09.337969] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:30.649 [2024-11-20 09:20:09.337980] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.649 [2024-11-20 09:20:09.337988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:30.649 [2024-11-20 09:20:09.337997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.385 ms 00:26:30.649 [2024-11-20 09:20:09.338005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.649 [2024-11-20 09:20:09.338076] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:30.649 [2024-11-20 09:20:09.338087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.649 [2024-11-20 09:20:09.338197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:30.649 [2024-11-20 09:20:09.338210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:30.649 [2024-11-20 09:20:09.338218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.649 [2024-11-20 09:20:09.353739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.649 [2024-11-20 09:20:09.353938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:30.649 [2024-11-20 09:20:09.353959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.493 ms 00:26:30.649 [2024-11-20 09:20:09.353967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.649 [2024-11-20 09:20:09.362936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.649 [2024-11-20 09:20:09.362974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:30.649 [2024-11-20 09:20:09.362985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:26:30.649 [2024-11-20 09:20:09.362993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:30.649 [2024-11-20 09:20:09.363092] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:26:30.649 [2024-11-20 09:20:09.363222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:30.649 [2024-11-20 09:20:09.363237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:30.649 [2024-11-20 09:20:09.363246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.131 ms 00:26:30.649 [2024-11-20 09:20:09.363254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.216 [2024-11-20 09:20:09.877246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.216 [2024-11-20 09:20:09.877449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:31.216 [2024-11-20 09:20:09.877472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 513.050 ms 00:26:31.216 [2024-11-20 09:20:09.877482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.216 [2024-11-20 09:20:09.881428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.216 [2024-11-20 09:20:09.881462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:31.216 [2024-11-20 09:20:09.881472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.044 ms 00:26:31.216 [2024-11-20 09:20:09.881481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.216 [2024-11-20 09:20:09.881900] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:26:31.216 [2024-11-20 09:20:09.881930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.216 [2024-11-20 09:20:09.881938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:31.216 [2024-11-20 09:20:09.881948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:26:31.216 [2024-11-20 09:20:09.881956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.216 [2024-11-20 09:20:09.881985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.216 [2024-11-20 09:20:09.881993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:31.216 [2024-11-20 09:20:09.882001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:31.216 [2024-11-20 09:20:09.882008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.216 [2024-11-20 09:20:09.882047] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 518.953 ms, result 0 00:26:31.216 [2024-11-20 09:20:09.882083] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:26:31.216 [2024-11-20 09:20:09.882153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.216 [2024-11-20 09:20:09.882163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:26:31.216 [2024-11-20 09:20:09.882170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:26:31.216 [2024-11-20 09:20:09.882177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.539582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.539645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:26:31.876 [2024-11-20 09:20:10.539659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 656.387 ms 00:26:31.876 [2024-11-20 09:20:10.539667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.543470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.543644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:26:31.876 [2024-11-20 09:20:10.543661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.875 ms 00:26:31.876 [2024-11-20 09:20:10.543669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.544014] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:26:31.876 [2024-11-20 09:20:10.544042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.544052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:26:31.876 [2024-11-20 09:20:10.544062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:26:31.876 [2024-11-20 09:20:10.544070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.544098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.544106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:26:31.876 [2024-11-20 09:20:10.544114] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:31.876 [2024-11-20 09:20:10.544121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.544155] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 662.065 ms, result 0 00:26:31.876 [2024-11-20 09:20:10.544194] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:31.876 [2024-11-20 09:20:10.544204] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:31.876 [2024-11-20 09:20:10.544214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.544222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:31.876 [2024-11-20 09:20:10.544230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1181.137 ms 00:26:31.876 [2024-11-20 09:20:10.544237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.544268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.544276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:31.876 [2024-11-20 09:20:10.544287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:31.876 [2024-11-20 09:20:10.544294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.555212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:31.876 [2024-11-20 09:20:10.555338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.555350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:31.876 [2024-11-20 09:20:10.555360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.028 ms 00:26:31.876 [2024-11-20 09:20:10.555368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.556104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.556128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:26:31.876 [2024-11-20 09:20:10.556141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.640 ms 00:26:31.876 [2024-11-20 09:20:10.556149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.558367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.558496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:31.876 [2024-11-20 09:20:10.558511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.200 ms 00:26:31.876 [2024-11-20 09:20:10.558519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.558561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.558570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:26:31.876 [2024-11-20 09:20:10.558578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:31.876 [2024-11-20 09:20:10.558589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.558694] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.558704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:31.876 [2024-11-20 09:20:10.558712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:26:31.876 [2024-11-20 09:20:10.558719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.876 [2024-11-20 09:20:10.558738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.876 [2024-11-20 09:20:10.558746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:31.877 [2024-11-20 09:20:10.558754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:31.877 [2024-11-20 09:20:10.558761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.877 [2024-11-20 09:20:10.558787] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:31.877 [2024-11-20 09:20:10.558799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.877 [2024-11-20 09:20:10.558806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:31.877 [2024-11-20 09:20:10.558814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:31.877 [2024-11-20 09:20:10.558821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.877 [2024-11-20 09:20:10.558890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:31.877 [2024-11-20 09:20:10.558900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:31.877 [2024-11-20 09:20:10.558908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:26:31.877 [2024-11-20 09:20:10.558915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:31.877 [2024-11-20 09:20:10.559886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1424.057 ms, result 0 00:26:31.877 [2024-11-20 09:20:10.575667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.877 [2024-11-20 09:20:10.591672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:26:31.877 [2024-11-20 09:20:10.599840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:31.877 Validate MD5 checksum, iteration 1 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:31.877 09:20:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:31.877 [2024-11-20 09:20:10.694862] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:26:31.877 [2024-11-20 09:20:10.695169] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79767 ] 00:26:32.134 [2024-11-20 09:20:10.855023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.134 [2024-11-20 09:20:10.956839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.038  [2024-11-20T09:20:12.957Z] Copying: 675/1024 [MB] (675 MBps) [2024-11-20T09:20:14.333Z] Copying: 1024/1024 [MB] (average 690 MBps) 00:26:35.414 00:26:35.414 09:20:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:35.414 09:20:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:37.951 Validate MD5 checksum, iteration 2 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cf18aaecb36e7b033b4338c93cadb772 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cf18aaecb36e7b033b4338c93cadb772 != \c\f\1\8\a\a\e\c\b\3\6\e\7\b\0\3\3\b\4\3\3\8\c\9\3\c\a\d\b\7\7\2 ]] 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:26:37.951 09:20:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:37.951 [2024-11-20 09:20:16.326168] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 00:26:37.951 [2024-11-20 09:20:16.326479] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79830 ] 00:26:37.951 [2024-11-20 09:20:16.486224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.951 [2024-11-20 09:20:16.589935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.337  [2024-11-20T09:20:18.829Z] Copying: 655/1024 [MB] (655 MBps) [2024-11-20T09:20:19.818Z] Copying: 1024/1024 [MB] (average 658 MBps) 00:26:40.899 00:26:40.899 09:20:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:40.899 09:20:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=65554b2f37b0521f3c616b140f779c20 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 65554b2f37b0521f3c616b140f779c20 != \6\5\5\5\4\b\2\f\3\7\b\0\5\2\1\f\3\c\6\1\6\b\1\4\0\f\7\7\9\c\2\0 ]] 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79734 ]] 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79734 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79734 ']' 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 79734 00:26:43.441 09:20:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79734 00:26:43.441 killing process with pid 79734 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79734' 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 79734 00:26:43.441 09:20:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 79734 00:26:44.015 [2024-11-20 09:20:22.799388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:26:44.015 [2024-11-20 09:20:22.815304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.015 [2024-11-20 09:20:22.815368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:44.015 [2024-11-20 09:20:22.815383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:44.015 [2024-11-20 09:20:22.815392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.015 [2024-11-20 09:20:22.815417] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:44.016 [2024-11-20 09:20:22.818411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.818599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:44.016 [2024-11-20 09:20:22.818621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.979 ms 00:26:44.016 [2024-11-20 09:20:22.818636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.818916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.818928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:44.016 [2024-11-20 09:20:22.818938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:26:44.016 [2024-11-20 09:20:22.818946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.820764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.820803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:44.016 [2024-11-20 09:20:22.820814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.799 ms 00:26:44.016 [2024-11-20 09:20:22.820825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.822014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.822036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:26:44.016 [2024-11-20 09:20:22.822047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.127 ms 00:26:44.016 [2024-11-20 09:20:22.822057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.833158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.833333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:44.016 [2024-11-20 09:20:22.833354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.040 ms 00:26:44.016 [2024-11-20 09:20:22.833363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.839295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.839357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:26:44.016 [2024-11-20 09:20:22.839371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.884 ms 00:26:44.016 [2024-11-20 09:20:22.839380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.839476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.839487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:44.016 [2024-11-20 09:20:22.839497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:44.016 [2024-11-20 09:20:22.839505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.850467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.850510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:26:44.016 [2024-11-20 09:20:22.850523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.935 ms 00:26:44.016 [2024-11-20 09:20:22.850532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.861281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.861324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:26:44.016 [2024-11-20 09:20:22.861335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.704 ms 00:26:44.016 [2024-11-20 09:20:22.861343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.871738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.871780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:44.016 [2024-11-20 09:20:22.871791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.348 ms 00:26:44.016 [2024-11-20 09:20:22.871799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.882040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.016 [2024-11-20 09:20:22.882083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:44.016 [2024-11-20 09:20:22.882095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.145 ms 00:26:44.016 [2024-11-20 09:20:22.882102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.016 [2024-11-20 09:20:22.882146] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:44.016 [2024-11-20 09:20:22.882162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:44.016 [2024-11-20 09:20:22.882174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:44.016 [2024-11-20 09:20:22.882183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:44.016 [2024-11-20 09:20:22.882191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882216] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:44.016 [2024-11-20 09:20:22.882309] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:44.016 [2024-11-20 09:20:22.882317] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 354db496-d469-4609-a16c-aeb5ba375d57 00:26:44.016 [2024-11-20 09:20:22.882326] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:44.016 [2024-11-20 09:20:22.882333] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:44.016 [2024-11-20 09:20:22.882340] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:44.017 [2024-11-20 09:20:22.882348] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:44.017 [2024-11-20 09:20:22.882355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:44.017 [2024-11-20 09:20:22.882363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:44.017 [2024-11-20 09:20:22.882370] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:44.017 [2024-11-20 09:20:22.882376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:44.017 [2024-11-20 09:20:22.882384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:44.017 [2024-11-20 09:20:22.882394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.017 [2024-11-20 09:20:22.882403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:44.017 [2024-11-20 09:20:22.882417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.250 ms 00:26:44.017 [2024-11-20 09:20:22.882425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.017 [2024-11-20 09:20:22.895898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.017 [2024-11-20 09:20:22.895936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:26:44.017 [2024-11-20 09:20:22.895948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.453 ms 00:26:44.017 [2024-11-20 09:20:22.895957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.017 [2024-11-20 09:20:22.896352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:44.017 [2024-11-20 09:20:22.896366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:44.017 [2024-11-20 09:20:22.896375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.371 ms 00:26:44.017 [2024-11-20 09:20:22.896383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:22.942655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:22.942938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:44.276 [2024-11-20 09:20:22.943073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:22.943103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:22.943179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:22.943202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:44.276 [2024-11-20 09:20:22.943222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:22.943241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:22.943393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:22.943594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:44.276 [2024-11-20 09:20:22.943621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:22.943641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:22.943680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:22.943722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:44.276 [2024-11-20 09:20:22.943743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:22.943762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.029526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.029730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:44.276 [2024-11-20 09:20:23.029792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.029815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.100617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.100887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:44.276 [2024-11-20 09:20:23.100951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.100976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.101090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.101115] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:44.276 [2024-11-20 09:20:23.101136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.101155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.101232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.101402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:44.276 [2024-11-20 09:20:23.101433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.101462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.101593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.101619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:44.276 [2024-11-20 09:20:23.101640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.101716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.101779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.101791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:44.276 [2024-11-20 09:20:23.101800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.101813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.101857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.101867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:44.276 [2024-11-20 09:20:23.101900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.101909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.101959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:44.276 [2024-11-20 09:20:23.101970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:44.276 [2024-11-20 09:20:23.101982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:44.276 [2024-11-20 09:20:23.101990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:44.276 [2024-11-20 09:20:23.102127] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 286.786 ms, result 0 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:45.218 Remove shared memory files 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:26:45.218 09:20:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:26:45.218 09:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:26:45.218 09:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79520 00:26:45.218 09:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:45.218 09:20:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:26:45.218 ************************************ 00:26:45.218 END TEST ftl_upgrade_shutdown 00:26:45.218 ************************************ 00:26:45.218 00:26:45.218 real 1m24.542s 00:26:45.218 user 1m58.572s 00:26:45.218 sys 0m19.543s 00:26:45.218 09:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:45.218 09:20:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:45.218 Process with pid 72443 is not found 00:26:45.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:45.218 09:20:24 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:26:45.218 09:20:24 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:26:45.218 09:20:24 ftl -- ftl/ftl.sh@14 -- # killprocess 72443 00:26:45.218 09:20:24 ftl -- common/autotest_common.sh@954 -- # '[' -z 72443 ']' 00:26:45.218 09:20:24 ftl -- common/autotest_common.sh@958 -- # kill -0 72443 00:26:45.218 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72443) - No such process 00:26:45.218 09:20:24 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 72443 is not found' 00:26:45.218 09:20:24 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:26:45.218 09:20:24 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79950 00:26:45.218 09:20:24 ftl -- ftl/ftl.sh@20 -- # waitforlisten 79950 00:26:45.219 09:20:24 ftl -- common/autotest_common.sh@835 -- # '[' -z 79950 ']' 00:26:45.219 09:20:24 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:45.219 09:20:24 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:45.219 09:20:24 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.219 09:20:24 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:45.219 09:20:24 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.219 09:20:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:45.479 [2024-11-20 09:20:24.137349] Starting SPDK v25.01-pre git sha1 4f0cbdcd1 / DPDK 24.03.0 initialization... 
00:26:45.479 [2024-11-20 09:20:24.137509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79950 ] 00:26:45.479 [2024-11-20 09:20:24.300354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.821 [2024-11-20 09:20:24.430978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.393 09:20:25 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.393 09:20:25 ftl -- common/autotest_common.sh@868 -- # return 0 00:26:46.393 09:20:25 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:46.654 nvme0n1 00:26:46.654 09:20:25 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:26:46.654 09:20:25 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:46.654 09:20:25 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:46.916 09:20:25 ftl -- ftl/common.sh@28 -- # stores=0409ccb3-3efa-4bb4-980e-87198a413af8 00:26:46.916 09:20:25 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:26:46.916 09:20:25 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0409ccb3-3efa-4bb4-980e-87198a413af8 00:26:47.177 09:20:25 ftl -- ftl/ftl.sh@23 -- # killprocess 79950 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@954 -- # '[' -z 79950 ']' 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@958 -- # kill -0 79950 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@959 -- # uname 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79950 00:26:47.177 killing process with pid 79950 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79950' 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@973 -- # kill 79950 00:26:47.177 09:20:25 ftl -- common/autotest_common.sh@978 -- # wait 79950 00:26:48.560 09:20:27 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:48.821 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:48.821 Waiting for block devices as requested 00:26:48.821 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:26:48.821 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:26:49.083 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:26:49.083 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:26:54.379 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:26:54.379 09:20:32 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:26:54.379 Remove shared memory files 00:26:54.379 09:20:32 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:54.379 09:20:32 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:26:54.379 09:20:32 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:26:54.379 09:20:32 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:26:54.379 09:20:32 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:54.380 09:20:32 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:26:54.380 
************************************ 00:26:54.380 END TEST ftl 00:26:54.380 ************************************ 00:26:54.380 00:26:54.380 real 10m56.285s 00:26:54.380 user 13m8.026s 00:26:54.380 sys 1m17.850s 00:26:54.380 09:20:32 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.380 09:20:32 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:54.380 09:20:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:54.380 09:20:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:54.380 09:20:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:54.380 09:20:33 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:54.380 09:20:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:54.380 09:20:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:54.380 09:20:33 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:54.380 09:20:33 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:26:54.380 09:20:33 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:26:54.380 09:20:33 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:26:54.380 09:20:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:54.380 09:20:33 -- common/autotest_common.sh@10 -- # set +x 00:26:54.380 09:20:33 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:26:54.380 09:20:33 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:26:54.380 09:20:33 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:26:54.380 09:20:33 -- common/autotest_common.sh@10 -- # set +x 00:26:55.788 INFO: APP EXITING 00:26:55.788 INFO: killing all VMs 00:26:55.788 INFO: killing vhost app 00:26:55.788 INFO: EXIT DONE 00:26:56.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:56.308 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:26:56.308 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:26:56.308 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:26:56.308 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:26:56.876 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:57.136 Cleaning 00:26:57.136 Removing: /var/run/dpdk/spdk0/config 00:26:57.136 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:57.136 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:57.136 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:57.136 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:57.136 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:57.136 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:57.136 Removing: /var/run/dpdk/spdk0 00:26:57.136 Removing: /var/run/dpdk/spdk_pid56914 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57111 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57329 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57422 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57461 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57584 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57602 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57790 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57876 00:26:57.136 Removing: /var/run/dpdk/spdk_pid57967 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58072 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58164 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58204 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58241 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58306 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58412 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58837 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58890 
00:26:57.136 Removing: /var/run/dpdk/spdk_pid58953 00:26:57.136 Removing: /var/run/dpdk/spdk_pid58969 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59077 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59093 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59206 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59222 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59286 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59304 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59362 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59380 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59559 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59601 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59684 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59862 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59951 00:26:57.136 Removing: /var/run/dpdk/spdk_pid59988 00:26:57.136 Removing: /var/run/dpdk/spdk_pid60448 00:26:57.136 Removing: /var/run/dpdk/spdk_pid60548 00:26:57.136 Removing: /var/run/dpdk/spdk_pid60660 00:26:57.136 Removing: /var/run/dpdk/spdk_pid60713 00:26:57.136 Removing: /var/run/dpdk/spdk_pid60743 00:26:57.136 Removing: /var/run/dpdk/spdk_pid60817 00:26:57.136 Removing: /var/run/dpdk/spdk_pid61443 00:26:57.136 Removing: /var/run/dpdk/spdk_pid61480 00:26:57.136 Removing: /var/run/dpdk/spdk_pid61984 00:26:57.136 Removing: /var/run/dpdk/spdk_pid62082 00:26:57.136 Removing: /var/run/dpdk/spdk_pid62197 00:26:57.136 Removing: /var/run/dpdk/spdk_pid62267 00:26:57.136 Removing: /var/run/dpdk/spdk_pid62292 00:26:57.136 Removing: /var/run/dpdk/spdk_pid62318 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64162 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64288 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64296 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64315 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64356 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64360 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64372 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64418 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64422 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64434 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64479 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64483 00:26:57.136 Removing: /var/run/dpdk/spdk_pid64495 00:26:57.136 Removing: /var/run/dpdk/spdk_pid65868 00:26:57.136 Removing: /var/run/dpdk/spdk_pid65966 00:26:57.136 Removing: /var/run/dpdk/spdk_pid67369 00:26:57.136 Removing: /var/run/dpdk/spdk_pid68739 00:26:57.136 Removing: /var/run/dpdk/spdk_pid68840 00:26:57.136 Removing: /var/run/dpdk/spdk_pid68920 00:26:57.136 Removing: /var/run/dpdk/spdk_pid69018 00:26:57.136 Removing: /var/run/dpdk/spdk_pid69140 00:26:57.136 Removing: /var/run/dpdk/spdk_pid69215 00:26:57.136 Removing: /var/run/dpdk/spdk_pid69360 00:26:57.136 Removing: /var/run/dpdk/spdk_pid69720 00:26:57.136 Removing: /var/run/dpdk/spdk_pid69757 00:26:57.136 Removing: /var/run/dpdk/spdk_pid70193 00:26:57.136 Removing: /var/run/dpdk/spdk_pid70377 00:26:57.136 Removing: /var/run/dpdk/spdk_pid70471 00:26:57.136 Removing: /var/run/dpdk/spdk_pid70585 00:26:57.396 Removing: /var/run/dpdk/spdk_pid70636 00:26:57.396 Removing: /var/run/dpdk/spdk_pid70656 00:26:57.396 Removing: /var/run/dpdk/spdk_pid70960 00:26:57.396 Removing: /var/run/dpdk/spdk_pid71009 00:26:57.396 Removing: /var/run/dpdk/spdk_pid71088 00:26:57.396 Removing: /var/run/dpdk/spdk_pid71482 00:26:57.396 Removing: /var/run/dpdk/spdk_pid71632 00:26:57.396 Removing: /var/run/dpdk/spdk_pid72443 00:26:57.396 Removing: /var/run/dpdk/spdk_pid72570 00:26:57.396 Removing: /var/run/dpdk/spdk_pid72741 00:26:57.396 Removing: 
/var/run/dpdk/spdk_pid72849 00:26:57.396 Removing: /var/run/dpdk/spdk_pid73167 00:26:57.396 Removing: /var/run/dpdk/spdk_pid73424 00:26:57.396 Removing: /var/run/dpdk/spdk_pid73781 00:26:57.396 Removing: /var/run/dpdk/spdk_pid73965 00:26:57.396 Removing: /var/run/dpdk/spdk_pid74156 00:26:57.396 Removing: /var/run/dpdk/spdk_pid74203 00:26:57.396 Removing: /var/run/dpdk/spdk_pid74462 00:26:57.396 Removing: /var/run/dpdk/spdk_pid74487 00:26:57.396 Removing: /var/run/dpdk/spdk_pid74544 00:26:57.396 Removing: /var/run/dpdk/spdk_pid74853 00:26:57.396 Removing: /var/run/dpdk/spdk_pid75088 00:26:57.396 Removing: /var/run/dpdk/spdk_pid76169 00:26:57.396 Removing: /var/run/dpdk/spdk_pid76517 00:26:57.396 Removing: /var/run/dpdk/spdk_pid76806 00:26:57.396 Removing: /var/run/dpdk/spdk_pid77305 00:26:57.396 Removing: /var/run/dpdk/spdk_pid77458 00:26:57.396 Removing: /var/run/dpdk/spdk_pid77556 00:26:57.396 Removing: /var/run/dpdk/spdk_pid77958 00:26:57.396 Removing: /var/run/dpdk/spdk_pid78023 00:26:57.396 Removing: /var/run/dpdk/spdk_pid78337 00:26:57.396 Removing: /var/run/dpdk/spdk_pid78607 00:26:57.396 Removing: /var/run/dpdk/spdk_pid78955 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79063 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79109 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79178 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79234 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79309 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79520 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79600 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79656 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79734 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79767 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79830 00:26:57.396 Removing: /var/run/dpdk/spdk_pid79950 00:26:57.396 Clean 00:26:57.396 09:20:36 -- common/autotest_common.sh@1453 -- # return 0 00:26:57.396 09:20:36 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:26:57.396 09:20:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.396 09:20:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.396 09:20:36 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:26:57.396 09:20:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.396 09:20:36 -- common/autotest_common.sh@10 -- # set +x 00:26:57.396 09:20:36 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:57.396 09:20:36 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:57.396 09:20:36 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:57.396 09:20:36 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:26:57.396 09:20:36 -- spdk/autotest.sh@398 -- # hostname 00:26:57.396 09:20:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:26:57.655 geninfo: WARNING: invalid characters removed from testname! 
00:27:24.223 09:21:00 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:25.609 09:21:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:28.139 09:21:06 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:30.039 09:21:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:32.057 09:21:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:34.604 09:21:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:37.151 09:21:15 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:37.151 09:21:15 -- spdk/autorun.sh@1 -- $ timing_finish 00:27:37.151 09:21:15 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:27:37.151 09:21:15 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:37.151 09:21:15 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:27:37.151 09:21:15 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:37.151 + [[ -n 5023 ]] 00:27:37.151 + sudo kill 5023 00:27:37.160 [Pipeline] } 00:27:37.174 [Pipeline] // timeout 00:27:37.179 [Pipeline] } 00:27:37.192 [Pipeline] // stage 00:27:37.197 [Pipeline] } 00:27:37.211 [Pipeline] // catchError 00:27:37.220 [Pipeline] stage 00:27:37.222 [Pipeline] { (Stop VM) 00:27:37.232 [Pipeline] sh 00:27:37.511 + vagrant halt 00:27:40.056 ==> default: Halting domain... 
00:27:45.362 [Pipeline] sh 00:27:45.683 + vagrant destroy -f 00:27:48.222 ==> default: Removing domain... 00:27:48.493 [Pipeline] sh 00:27:48.841 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:27:48.850 [Pipeline] } 00:27:48.864 [Pipeline] // stage 00:27:48.869 [Pipeline] } 00:27:48.883 [Pipeline] // dir 00:27:48.889 [Pipeline] } 00:27:48.902 [Pipeline] // wrap 00:27:48.908 [Pipeline] } 00:27:48.920 [Pipeline] // catchError 00:27:48.928 [Pipeline] stage 00:27:48.930 [Pipeline] { (Epilogue) 00:27:48.941 [Pipeline] sh 00:27:49.224 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:55.814 [Pipeline] catchError 00:27:55.816 [Pipeline] { 00:27:55.828 [Pipeline] sh 00:27:56.106 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:56.106 Artifacts sizes are good 00:27:56.113 [Pipeline] } 00:27:56.127 [Pipeline] // catchError 00:27:56.137 [Pipeline] archiveArtifacts 00:27:56.143 Archiving artifacts 00:27:56.247 [Pipeline] cleanWs 00:27:56.295 [WS-CLEANUP] Deleting project workspace... 00:27:56.295 [WS-CLEANUP] Deferred wipeout is used... 00:27:56.301 [WS-CLEANUP] done 00:27:56.302 [Pipeline] } 00:27:56.319 [Pipeline] // stage 00:27:56.323 [Pipeline] } 00:27:56.336 [Pipeline] // node 00:27:56.341 [Pipeline] End of Pipeline 00:27:56.377 Finished: SUCCESS