00:00:00.000 Started by upstream project "autotest-per-patch" build number 132298
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.132 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.133 The recommended git tool is: git
00:00:00.133 using credential 00000000-0000-0000-0000-000000000002
00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.192 Fetching changes from the remote Git repository
00:00:00.194 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.246 Using shallow fetch with depth 1
00:00:00.246 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.246 > git --version # timeout=10
00:00:00.288 > git --version # 'git version 2.39.2'
00:00:00.288 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.306 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.306 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.927 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.937 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.948 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:05.948 > git config core.sparsecheckout # timeout=10
00:00:05.959 > git read-tree -mu HEAD # timeout=10
00:00:05.973 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:05.989 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:05.989 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:06.070 [Pipeline] Start of Pipeline
00:00:06.085 [Pipeline] library
00:00:06.087 Loading library shm_lib@master
00:00:06.087 Library shm_lib@master is cached. Copying from home.
00:00:06.101 [Pipeline] node
00:00:06.118 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:06.119 [Pipeline] {
00:00:06.128 [Pipeline] catchError
00:00:06.129 [Pipeline] {
00:00:06.139 [Pipeline] wrap
00:00:06.146 [Pipeline] {
00:00:06.151 [Pipeline] stage
00:00:06.152 [Pipeline] { (Prologue)
00:00:06.165 [Pipeline] echo
00:00:06.167 Node: VM-host-WFP1
00:00:06.172 [Pipeline] cleanWs
00:00:06.180 [WS-CLEANUP] Deleting project workspace...
00:00:06.180 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.185 [WS-CLEANUP] done
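The prologue above is Jenkins' standard shallow checkout of the job-pool repository. A minimal sketch of the same sequence for reproducing it by hand follows, assuming anonymous HTTPS access works (the real job authenticates via GIT_ASKPASS and routes through proxy-dmz.intel.com:911); the URL and revision are taken verbatim from the log:

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 keeps only the tip commit, which is all the job needs
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # detached-HEAD checkout of the fetched revision, as Jenkins does
    git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf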
00:00:06.421 [Pipeline] setCustomBuildProperty
00:00:06.496 [Pipeline] httpRequest
00:00:09.527 [Pipeline] echo
00:00:09.528 Sorcerer 10.211.164.101 is dead
00:00:09.537 [Pipeline] httpRequest
00:00:12.216 [Pipeline] echo
00:00:12.218 Sorcerer 10.211.164.101 is alive
00:00:12.230 [Pipeline] retry
00:00:12.233 [Pipeline] {
00:00:12.248 [Pipeline] httpRequest
00:00:12.253 HttpMethod: GET
00:00:12.253 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.254 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:12.266 Response Code: HTTP/1.1 200 OK
00:00:12.267 Success: Status code 200 is in the accepted range: 200,404
00:00:12.268 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:14.521 [Pipeline] }
00:00:14.538 [Pipeline] // retry
00:00:14.545 [Pipeline] sh
00:00:14.826 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:14.842 [Pipeline] httpRequest
00:00:17.860 [Pipeline] echo
00:00:17.862 Sorcerer 10.211.164.101 is dead
00:00:17.873 [Pipeline] httpRequest
00:00:18.721 [Pipeline] echo
00:00:18.723 Sorcerer 10.211.164.101 is alive
00:00:18.735 [Pipeline] retry
00:00:18.738 [Pipeline] {
00:00:18.755 [Pipeline] httpRequest
00:00:18.760 HttpMethod: GET
00:00:18.761 URL: http://10.211.164.101/packages/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz
00:00:18.761 Sending request to url: http://10.211.164.101/packages/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz
00:00:18.763 Response Code: HTTP/1.1 200 OK
00:00:18.764 Success: Status code 200 is in the accepted range: 200,404
00:00:18.765 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz
00:00:35.976 [Pipeline] }
00:00:35.995 [Pipeline] // retry
00:00:36.003 [Pipeline] sh
00:00:36.289 + tar --no-same-owner -xf spdk_f1a181ac34ff9dc22c85383e5b547bbedfdae1bf.tar.gz
00:00:38.857 [Pipeline] sh
00:00:39.141 + git -C spdk log --oneline -n5
00:00:39.141 f1a181ac3 test/scheduler: Drop cpufreq_high_prio[@]
00:00:39.141 e081e4a1a test/scheduler: Calculate freq turbo range based on sysfs
00:00:39.141 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:39.141 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:39.141 4bcab9fb9 correct kick for CQ full case
00:00:39.161 [Pipeline] writeFile
00:00:39.175 [Pipeline] sh
00:00:39.455 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:39.466 [Pipeline] sh
00:00:39.744 + cat autorun-spdk.conf
00:00:39.744 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:39.744 SPDK_TEST_NVME=1
00:00:39.744 SPDK_TEST_FTL=1
00:00:39.744 SPDK_TEST_ISAL=1
00:00:39.744 SPDK_RUN_ASAN=1
00:00:39.744 SPDK_RUN_UBSAN=1
00:00:39.744 SPDK_TEST_XNVME=1
00:00:39.744 SPDK_TEST_NVME_FDP=1
00:00:39.744 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:39.751 RUN_NIGHTLY=0
00:00:39.753 [Pipeline] }
00:00:39.765 [Pipeline] // stage
00:00:39.781 [Pipeline] stage
00:00:39.783 [Pipeline] { (Run VM)
00:00:39.795 [Pipeline] sh
00:00:40.074 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:40.074 + echo 'Start stage prepare_nvme.sh'
00:00:40.074 Start stage prepare_nvme.sh
00:00:40.074 + [[ -n 7 ]]
00:00:40.074 + disk_prefix=ex7
00:00:40.074 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:40.074 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:40.074 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:40.074 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:40.074 ++ SPDK_TEST_NVME=1
00:00:40.074 ++ SPDK_TEST_FTL=1
00:00:40.074 ++ SPDK_TEST_ISAL=1
00:00:40.074 ++ SPDK_RUN_ASAN=1
00:00:40.074 ++ SPDK_RUN_UBSAN=1
00:00:40.074 ++ SPDK_TEST_XNVME=1
00:00:40.074 ++ SPDK_TEST_NVME_FDP=1
00:00:40.074 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:40.074 ++ RUN_NIGHTLY=0
00:00:40.074 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:40.074 + nvme_files=()
00:00:40.074 + declare -A nvme_files
00:00:40.074 + backend_dir=/var/lib/libvirt/images/backends
00:00:40.074 + nvme_files['nvme.img']=5G
00:00:40.074 + nvme_files['nvme-cmb.img']=5G
00:00:40.074 + nvme_files['nvme-multi0.img']=4G
00:00:40.074 + nvme_files['nvme-multi1.img']=4G
00:00:40.074 + nvme_files['nvme-multi2.img']=4G
00:00:40.074 + nvme_files['nvme-openstack.img']=8G
00:00:40.074 + nvme_files['nvme-zns.img']=5G
00:00:40.074 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:40.074 + (( SPDK_TEST_FTL == 1 ))
00:00:40.074 + nvme_files["nvme-ftl.img"]=6G
00:00:40.074 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:40.074 + nvme_files["nvme-fdp.img"]=1G
00:00:40.074 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:40.074 + for nvme in "${!nvme_files[@]}"
00:00:40.074 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:40.074 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:40.074 + for nvme in "${!nvme_files[@]}"
00:00:40.074 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-ftl.img -s 6G
00:00:40.074 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:40.074 + for nvme in "${!nvme_files[@]}"
00:00:40.074 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:40.074 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:40.074 + for nvme in "${!nvme_files[@]}"
00:00:40.074 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:40.332 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:40.332 + for nvme in "${!nvme_files[@]}"
00:00:40.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:40.332 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:40.332 + for nvme in "${!nvme_files[@]}"
00:00:40.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:40.332 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:40.332 + for nvme in "${!nvme_files[@]}"
00:00:40.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:40.332 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:40.332 + for nvme in "${!nvme_files[@]}"
00:00:40.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-fdp.img -s 1G
00:00:40.332 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:40.332 + for nvme in "${!nvme_files[@]}"
00:00:40.332 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:41.266 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:41.266 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:41.266 + echo 'End stage prepare_nvme.sh'
00:00:41.266 End stage prepare_nvme.sh
00:00:41.273 [Pipeline] sh
00:00:41.548 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:41.548 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex7-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex7-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:41.548
00:00:41.548 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:41.548 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:41.548 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:41.548 HELP=0
00:00:41.548 DRY_RUN=0
00:00:41.548 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,
00:00:41.548 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:41.548 NVME_AUTO_CREATE=0
00:00:41.548 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,,
00:00:41.548 NVME_CMB=,,,,
00:00:41.548 NVME_PMR=,,,,
00:00:41.548 NVME_ZNS=,,,,
00:00:41.548 NVME_MS=true,,,,
00:00:41.548 NVME_FDP=,,,on,
00:00:41.548 SPDK_VAGRANT_DISTRO=fedora39
00:00:41.548 SPDK_VAGRANT_VMCPU=10
00:00:41.548 SPDK_VAGRANT_VMRAM=12288
00:00:41.548 SPDK_VAGRANT_PROVIDER=libvirt
00:00:41.548 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:41.548 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:41.548 SPDK_OPENSTACK_NETWORK=0
00:00:41.548 VAGRANT_PACKAGE_BOX=0
00:00:41.548 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:41.548 FORCE_DISTRO=true
00:00:41.548 VAGRANT_BOX_VERSION=
00:00:41.548 EXTRA_VAGRANTFILES=
00:00:41.549 NIC_MODEL=e1000
00:00:41.549
00:00:41.549 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:41.549 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
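The "Formatting ... fmt=raw ... preallocation=falloc" lines in the prepare_nvme stage above are characteristic qemu-img output, so a reasonable sketch of what that loop boils down to is the following. It assumes create_nvme_img.sh ultimately wraps qemu-img create; the sizes mirror the nvme_files map from the trace:

    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-multi0.img]=4G
        [nvme-multi1.img]=4G [nvme-multi2.img]=4G [nvme-openstack.img]=8G
        [nvme-zns.img]=5G [nvme-ftl.img]=6G [nvme-fdp.img]=1G
    )
    for nvme in "${!nvme_files[@]}"; do
        # falloc preallocation reserves blocks without writing zeroes
        sudo qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex7-$nvme" "${nvme_files[$nvme]}"
    done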
00:00:44.083 Bringing machine 'default' up with 'libvirt' provider...
00:00:45.462 ==> default: Creating image (snapshot of base box volume).
00:00:45.462 ==> default: Creating domain with the following settings...
00:00:45.462 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731667471_b23fa7cac688ff9c1c54
00:00:45.462 ==> default: -- Domain type: kvm
00:00:45.462 ==> default: -- Cpus: 10
00:00:45.462 ==> default: -- Feature: acpi
00:00:45.462 ==> default: -- Feature: apic
00:00:45.462 ==> default: -- Feature: pae
00:00:45.462 ==> default: -- Memory: 12288M
00:00:45.462 ==> default: -- Memory Backing: hugepages:
00:00:45.462 ==> default: -- Management MAC:
00:00:45.462 ==> default: -- Loader:
00:00:45.462 ==> default: -- Nvram:
00:00:45.462 ==> default: -- Base box: spdk/fedora39
00:00:45.462 ==> default: -- Storage pool: default
00:00:45.462 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731667471_b23fa7cac688ff9c1c54.img (20G)
00:00:45.462 ==> default: -- Volume Cache: default
00:00:45.462 ==> default: -- Kernel:
00:00:45.462 ==> default: -- Initrd:
00:00:45.462 ==> default: -- Graphics Type: vnc
00:00:45.462 ==> default: -- Graphics Port: -1
00:00:45.462 ==> default: -- Graphics IP: 127.0.0.1
00:00:45.462 ==> default: -- Graphics Password: Not defined
00:00:45.462 ==> default: -- Video Type: cirrus
00:00:45.462 ==> default: -- Video VRAM: 9216
00:00:45.462 ==> default: -- Sound Type:
00:00:45.462 ==> default: -- Keymap: en-us
00:00:45.462 ==> default: -- TPM Path:
00:00:45.462 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:45.462 ==> default: -- Command line args:
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:45.462 ==> default: -> value=-drive,
00:00:45.462 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:45.462 ==> default: -> value=-drive,
00:00:45.462 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-1-drive0,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:45.462 ==> default: -> value=-drive,
00:00:45.462 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.462 ==> default: -> value=-drive,
00:00:45.462 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.462 ==> default: -> value=-drive,
00:00:45.462 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:45.462 ==> default: -> value=-drive,
00:00:45.462 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:45.462 ==> default: -> value=-device,
00:00:45.462 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:46.030 ==> default: Creating shared folders metadata...
00:00:46.030 ==> default: Starting domain.
00:00:47.407 ==> default: Waiting for domain to get an IP address...
00:01:05.552 ==> default: Waiting for SSH to become available...
00:01:05.552 ==> default: Configuring and enabling network interfaces...
00:01:09.746 default: SSH address: 192.168.121.42:22
00:01:09.746 default: SSH username: vagrant
00:01:09.746 default: SSH auth method: private key
00:01:13.033 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:23.033 ==> default: Mounting SSHFS shared folder...
00:01:23.975 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:23.975 ==> default: Checking Mount..
00:01:25.349 ==> default: Folder Successfully Mounted!
00:01:25.349 ==> default: Running provisioner: file...
00:01:26.722 default: ~/.gitconfig => .gitconfig
00:01:26.982
00:01:26.982 SUCCESS!
00:01:26.982
00:01:26.982 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:26.982 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:26.982 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:26.982
00:01:26.990 [Pipeline] }
00:01:27.006 [Pipeline] // stage
00:01:27.012 [Pipeline] dir
00:01:27.013 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:27.014 [Pipeline] {
00:01:27.023 [Pipeline] catchError
00:01:27.024 [Pipeline] {
00:01:27.033 [Pipeline] sh
00:01:27.307 + vagrant ssh-config --host vagrant
00:01:27.307 + sed -ne /^Host/,$p
00:01:27.307 + tee ssh_conf
00:01:30.597 Host vagrant
00:01:30.597 HostName 192.168.121.42
00:01:30.597 User vagrant
00:01:30.597 Port 22
00:01:30.597 UserKnownHostsFile /dev/null
00:01:30.597 StrictHostKeyChecking no
00:01:30.597 PasswordAuthentication no
00:01:30.597 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:30.597 IdentitiesOnly yes
00:01:30.598 LogLevel FATAL
00:01:30.598 ForwardAgent yes
00:01:30.598 ForwardX11 yes
00:01:30.598
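The command-line args echoed during domain creation above attach four emulated NVMe controllers to the guest; the fourth sits in an FDP-enabled NVMe subsystem, which is what SPDK_TEST_NVME_FDP exercises. Pulled out of the Vagrant/libvirt wrapper, the FDP-related subset would look roughly like this as a standalone invocation; device and property names are copied from the log, while the machine and memory flags are illustrative only:

    qemu-system-x86_64 -machine q35 -m 1G \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,logical_block_size=4096,physical_block_size=4096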
00:01:30.613 [Pipeline] withEnv
00:01:30.616 [Pipeline] {
00:01:30.634 [Pipeline] sh
00:01:30.918 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:30.918 source /etc/os-release
00:01:30.918 [[ -e /image.version ]] && img=$(< /image.version)
00:01:30.918 # Minimal, systemd-like check.
00:01:30.918 if [[ -e /.dockerenv ]]; then
00:01:30.918 # Clear garbage from the node's name:
00:01:30.918 # agt-er_autotest_547-896 -> autotest_547-896
00:01:30.918 # $HOSTNAME is the actual container id
00:01:30.918 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:30.918 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:30.918 # We can assume this is a mount from a host where container is running,
00:01:30.918 # so fetch its hostname to easily identify the target swarm worker.
00:01:30.918 container="$(< /etc/hostname) ($agent)"
00:01:30.918 else
00:01:30.918 # Fallback
00:01:30.918 container=$agent
00:01:30.918 fi
00:01:30.918 fi
00:01:30.918 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:30.918
00:01:31.190 [Pipeline] }
00:01:31.206 [Pipeline] // withEnv
00:01:31.214 [Pipeline] setCustomBuildProperty
00:01:31.229 [Pipeline] stage
00:01:31.231 [Pipeline] { (Tests)
00:01:31.248 [Pipeline] sh
00:01:31.537 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:31.811 [Pipeline] sh
00:01:32.094 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:32.367 [Pipeline] timeout
00:01:32.368 Timeout set to expire in 50 min
00:01:32.369 [Pipeline] {
00:01:32.383 [Pipeline] sh
00:01:32.664 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:33.231 HEAD is now at f1a181ac3 test/scheduler: Drop cpufreq_high_prio[@]
00:01:33.244 [Pipeline] sh
00:01:33.526 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:33.801 [Pipeline] sh
00:01:34.111 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:34.387 [Pipeline] sh
00:01:34.669 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:34.927 ++ readlink -f spdk_repo
00:01:34.928 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:34.928 + [[ -n /home/vagrant/spdk_repo ]]
00:01:34.928 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:34.928 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:34.928 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:34.928 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:34.928 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:34.928 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:34.928 + cd /home/vagrant/spdk_repo
00:01:34.928 + source /etc/os-release
00:01:34.928 ++ NAME='Fedora Linux'
00:01:34.928 ++ VERSION='39 (Cloud Edition)'
00:01:34.928 ++ ID=fedora
00:01:34.928 ++ VERSION_ID=39
00:01:34.928 ++ VERSION_CODENAME=
00:01:34.928 ++ PLATFORM_ID=platform:f39
00:01:34.928 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:34.928 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:34.928 ++ LOGO=fedora-logo-icon
00:01:34.928 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:34.928 ++ HOME_URL=https://fedoraproject.org/
00:01:34.928 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:34.928 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:34.928 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:34.928 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:34.928 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:34.928 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:34.928 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:34.928 ++ SUPPORT_END=2024-11-12
00:01:34.928 ++ VARIANT='Cloud Edition'
00:01:34.928 ++ VARIANT_ID=cloud
00:01:34.928 + uname -a
00:01:34.928 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:34.928 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:35.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:35.755 Hugepages
00:01:35.755 node hugesize free / total
00:01:35.755 node0 1048576kB 0 / 0
00:01:35.755 node0 2048kB 0 / 0
00:01:35.756
00:01:35.756 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:35.756 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:35.756 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:35.756 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:35.756 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:35.756 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:35.756 + rm -f /tmp/spdk-ld-path
00:01:35.756 + source autorun-spdk.conf
00:01:35.756 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.756 ++ SPDK_TEST_NVME=1
00:01:35.756 ++ SPDK_TEST_FTL=1
00:01:35.756 ++ SPDK_TEST_ISAL=1
00:01:35.756 ++ SPDK_RUN_ASAN=1
00:01:35.756 ++ SPDK_RUN_UBSAN=1
00:01:35.756 ++ SPDK_TEST_XNVME=1
00:01:35.756 ++ SPDK_TEST_NVME_FDP=1
00:01:35.756 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:35.756 ++ RUN_NIGHTLY=0
00:01:35.756 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:35.756 + [[ -n '' ]]
00:01:35.756 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:36.015 + for M in /var/spdk/build-*-manifest.txt
00:01:36.015 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:36.015 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.015 + for M in /var/spdk/build-*-manifest.txt
00:01:36.015 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.015 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.015 + for M in /var/spdk/build-*-manifest.txt
00:01:36.015 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.015 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:36.015 ++ uname
00:01:36.015 + [[ Linux == \L\i\n\u\x ]]
00:01:36.015 + sudo dmesg -T
00:01:36.015 + sudo dmesg --clear
00:01:36.015 + dmesg_pid=5242
00:01:36.015 + [[ Fedora Linux == FreeBSD ]]
00:01:36.015 + sudo dmesg -Tw
00:01:36.015 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.015 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.015 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.015 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.015 + export FIO_BIN=/usr/src/fio-static/fio
00:01:36.015 + FIO_BIN=/usr/src/fio-static/fio
00:01:36.015 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.015 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.015 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.015 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.015 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.015 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.015 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.015 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.015 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.015 10:45:22 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:36.015 10:45:22 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:36.016 10:45:22 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:36.016 10:45:22 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:36.016 10:45:22 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:36.276 10:45:22 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:36.276 10:45:22 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:36.276 10:45:22 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:36.276 10:45:22 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:36.276 10:45:22 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:36.276 10:45:22 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:36.276 10:45:22 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.276 10:45:22 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.276 10:45:22 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.276 10:45:22 -- paths/export.sh@5 -- $ export PATH
00:01:36.276 10:45:22 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.276 10:45:22 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:36.276 10:45:22 -- common/autobuild_common.sh@486 -- $ date +%s
00:01:36.276 10:45:22 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731667522.XXXXXX
00:01:36.276 10:45:22 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731667522.23U68r
00:01:36.276 10:45:22 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:36.276 10:45:22 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:01:36.276 10:45:22 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:36.276 10:45:22 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:36.276 10:45:22 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:36.276 10:45:22 -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:36.276 10:45:22 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:36.276 10:45:22 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.276 10:45:22 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:36.276 10:45:22 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:36.276 10:45:22 -- pm/common@17 -- $ local monitor
00:01:36.276 10:45:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.276 10:45:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:36.276 10:45:22 -- pm/common@25 -- $ sleep 1
00:01:36.276 10:45:22 -- pm/common@21 -- $ date +%s
00:01:36.276 10:45:23 -- pm/common@21 -- $ date +%s
00:01:36.276 10:45:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731667523
00:01:36.276 10:45:23 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731667523
00:01:36.276 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731667523_collect-cpu-load.pm.log
00:01:36.276 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731667523_collect-vmstat.pm.log
00:01:37.212 10:45:24 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:37.213 10:45:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:37.213 10:45:24 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:37.213 10:45:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:37.213 10:45:24 -- spdk/autobuild.sh@16 -- $ date -u
00:01:37.213 Fri Nov 15 10:45:24 AM UTC 2024
00:01:37.213 10:45:24 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:37.213 v25.01-pre-191-gf1a181ac3
00:01:37.213 10:45:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:37.213 10:45:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:37.213 10:45:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:37.213 10:45:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:37.213 10:45:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.213 ************************************
00:01:37.213 START TEST asan
00:01:37.213 ************************************
00:01:37.213 using asan
00:01:37.213 10:45:24 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:37.213
00:01:37.213 real 0m0.000s
00:01:37.213 user 0m0.000s
00:01:37.213 sys 0m0.000s
00:01:37.213 10:45:24 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:37.213 ************************************
00:01:37.213 END TEST asan
00:01:37.213 ************************************
00:01:37.213 10:45:24 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:37.472 10:45:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:37.472 10:45:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:37.472 10:45:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:37.472 10:45:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:37.472 10:45:24 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.472 ************************************
00:01:37.472 START TEST ubsan
00:01:37.472 ************************************
00:01:37.472 using ubsan
00:01:37.472 10:45:24 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:37.472
00:01:37.472 real 0m0.000s
00:01:37.472 user 0m0.000s
00:01:37.472 sys 0m0.000s
00:01:37.472 10:45:24 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:37.472 ************************************
00:01:37.472 10:45:24 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:37.472 END TEST ubsan
00:01:37.472 ************************************
00:01:37.472 10:45:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:37.472 10:45:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:37.472 10:45:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:37.472 10:45:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:37.472 10:45:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:37.472 10:45:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:37.472 10:45:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
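The autorun-spdk.conf flags sourced above surface as configure options in config_params, as the trace shows: SPDK_RUN_ASAN=1 and SPDK_RUN_UBSAN=1 become --enable-asan and --enable-ubsan, and SPDK_TEST_XNVME=1 becomes --with-xnvme in the configure invocation that follows below. A hedged sketch of that mapping, not SPDK's actual get_config_params implementation:

    source /home/vagrant/spdk_repo/autorun-spdk.conf
    config_params='--enable-debug --enable-werror'
    [[ $SPDK_RUN_ASAN == 1 ]]   && config_params+=' --enable-asan'
    [[ $SPDK_RUN_UBSAN == 1 ]]  && config_params+=' --enable-ubsan'
    [[ $SPDK_TEST_XNVME == 1 ]] && config_params+=' --with-xnvme'
    echo "$config_params"   # compare with the configure line recorded below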
00:01:37.472 10:45:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:37.472 10:45:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:37.730 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:37.730 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:37.989 Using 'verbs' RDMA provider
00:01:54.255 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:12.439 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:12.439 Creating mk/config.mk...done.
00:02:12.439 Creating mk/cc.flags.mk...done.
00:02:12.439 Type 'make' to build.
00:02:12.439 10:45:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:12.439 10:45:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:12.439 10:45:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:12.439 10:45:57 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.439 ************************************
00:02:12.439 START TEST make
00:02:12.439 ************************************
00:02:12.439 10:45:57 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:12.439 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:12.439 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:12.439 meson setup builddir \
00:02:12.439 -Dwith-libaio=enabled \
00:02:12.439 -Dwith-liburing=enabled \
00:02:12.439 -Dwith-libvfn=disabled \
00:02:12.439 -Dwith-spdk=disabled \
00:02:12.439 -Dexamples=false \
00:02:12.439 -Dtests=false \
00:02:12.439 -Dtools=false && \
00:02:12.439 meson compile -C builddir && \
00:02:12.439 cd -)
00:02:12.439 make[1]: Nothing to be done for 'all'.
00:02:13.376 The Meson build system
00:02:13.376 Version: 1.5.0
00:02:13.376 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:13.376 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:13.376 Build type: native build
00:02:13.376 Project name: xnvme
00:02:13.376 Project version: 0.7.5
00:02:13.376 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:13.376 C linker for the host machine: cc ld.bfd 2.40-14
00:02:13.376 Host machine cpu family: x86_64
00:02:13.376 Host machine cpu: x86_64
00:02:13.376 Message: host_machine.system: linux
00:02:13.376 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:13.376 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:13.376 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:13.376 Run-time dependency threads found: YES
00:02:13.376 Has header "setupapi.h" : NO
00:02:13.376 Has header "linux/blkzoned.h" : YES
00:02:13.376 Has header "linux/blkzoned.h" : YES (cached)
00:02:13.376 Has header "libaio.h" : YES
00:02:13.376 Library aio found: YES
00:02:13.376 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:13.376 Run-time dependency liburing found: YES 2.2
00:02:13.376 Dependency libvfn skipped: feature with-libvfn disabled
00:02:13.376 Found CMake: /usr/bin/cmake (3.27.7)
00:02:13.376 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:13.376 Subproject spdk : skipped: feature with-spdk disabled
00:02:13.376 Run-time dependency appleframeworks found: NO (tried framework)
00:02:13.376 Run-time dependency appleframeworks found: NO (tried framework)
00:02:13.376 Library rt found: YES
00:02:13.376 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:13.376 Configuring xnvme_config.h using configuration
00:02:13.376 Configuring xnvme.spec using configuration
00:02:13.376 Run-time dependency bash-completion found: YES 2.11
00:02:13.376 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:13.376 Program cp found: YES (/usr/bin/cp)
00:02:13.376 Build targets in project: 3
00:02:13.376
00:02:13.376 xnvme 0.7.5
00:02:13.376
00:02:13.376 Subprojects
00:02:13.376 spdk : NO Feature 'with-spdk' disabled
00:02:13.376
00:02:13.376 User defined options
00:02:13.376 examples : false
00:02:13.376 tests : false
00:02:13.376 tools : false
00:02:13.376 with-libaio : enabled
00:02:13.376 with-liburing: enabled
00:02:13.376 with-libvfn : disabled
00:02:13.376 with-spdk : disabled
00:02:13.376
00:02:13.376 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:13.943 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:13.943 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:13.943 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:13.943 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:13.943 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:13.943 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:13.943 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:13.943 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:13.943 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:13.943 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:13.943 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:13.943 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:14.202 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:14.202 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:14.202 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:14.202 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:14.202 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:14.202 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:14.202 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:14.202 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:14.202 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:14.202 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:14.202 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:14.202 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:14.202 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:14.202 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:14.202 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:14.202 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:14.202 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:14.202 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:14.202 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:14.202 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:14.202 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:14.202 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:14.202 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:14.202 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:14.202 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:14.202 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:14.202 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:14.202 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:14.202 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:14.202 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:14.202 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:14.202 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:14.202 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:14.202 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:14.202 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:14.460 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:14.460 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:14.460 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:14.460 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:14.460 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:14.460 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:14.460 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:14.460 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:14.461 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:14.461 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:14.461 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:14.461 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:14.461 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:14.461 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:14.461 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:14.461 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:14.461 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:14.461 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:14.461 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:14.461 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:14.461 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:14.461 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:14.461 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:14.719 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:14.719 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:14.719 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:14.719 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:14.978 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:14.978 [75/76] Linking static target lib/libxnvme.a
00:02:14.978 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:14.978 INFO: autodetecting backend as ninja
00:02:14.978 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:14.978 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:23.151 The Meson build system
00:02:23.151 Version: 1.5.0
00:02:23.151 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:23.151 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:23.151 Build type: native build
00:02:23.151 Program cat found: YES (/usr/bin/cat)
00:02:23.151 Project name: DPDK
00:02:23.151 Project version: 24.03.0
00:02:23.151 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:23.151 C linker for the host machine: cc ld.bfd 2.40-14
00:02:23.151 Host machine cpu family: x86_64
00:02:23.151 Host machine cpu: x86_64
00:02:23.151 Message: ## Building in Developer Mode ##
00:02:23.151 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:23.151 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:23.151 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:23.151 Program python3 found: YES (/usr/bin/python3)
00:02:23.151 Program cat found: YES (/usr/bin/cat)
00:02:23.151 Compiler for C supports arguments -march=native: YES
00:02:23.151 Checking for size of "void *" : 8
00:02:23.151 Checking for size of "void *" : 8 (cached)
00:02:23.151 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:23.151 Library m found: YES
00:02:23.151 Library numa found: YES
00:02:23.151 Has header "numaif.h" : YES
00:02:23.151 Library fdt found: NO
00:02:23.151 Library execinfo found: NO
00:02:23.151 Has header "execinfo.h" : YES
00:02:23.151 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:23.151 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:23.151 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:23.151 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:23.151 Run-time dependency openssl found: YES 3.1.1
00:02:23.151 Run-time dependency libpcap found: YES 1.10.4
00:02:23.151 Has header "pcap.h" with dependency libpcap: YES
00:02:23.151 Compiler for C supports arguments -Wcast-qual: YES
00:02:23.151 Compiler for C supports arguments -Wdeprecated: YES
00:02:23.151 Compiler for C supports arguments -Wformat: YES
00:02:23.151 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:23.151 Compiler for C supports arguments -Wformat-security: NO
00:02:23.151 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:23.151 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:23.151 Compiler for C supports arguments -Wnested-externs: YES
00:02:23.151 Compiler for C supports arguments -Wold-style-definition: YES
00:02:23.151 Compiler for C supports arguments -Wpointer-arith: YES
00:02:23.151 Compiler for C supports arguments -Wsign-compare: YES
00:02:23.151 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:23.151 Compiler for C supports arguments -Wundef: YES
00:02:23.151 Compiler for C supports arguments -Wwrite-strings: YES
00:02:23.151 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:23.151 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:23.151 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:23.151 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:23.151 Program objdump found: YES (/usr/bin/objdump)
00:02:23.151 Compiler for C supports arguments -mavx512f: YES
00:02:23.151 Checking if "AVX512 checking" compiles: YES
00:02:23.151 Fetching value of define "__SSE4_2__" : 1
00:02:23.151 Fetching value of define "__AES__" : 1
00:02:23.151 Fetching value of define "__AVX__" : 1
00:02:23.151 Fetching value of define "__AVX2__" : 1
00:02:23.151 Fetching value of define "__AVX512BW__" : 1
00:02:23.151 Fetching value of define "__AVX512CD__" : 1
00:02:23.151 Fetching value of define "__AVX512DQ__" : 1
00:02:23.151 Fetching value of define "__AVX512F__" : 1
00:02:23.151 Fetching value of define "__AVX512VL__" : 1
00:02:23.151 Fetching value of define "__PCLMUL__" : 1
00:02:23.151 Fetching value of define "__RDRND__" : 1
00:02:23.151 Fetching value of define "__RDSEED__" : 1
00:02:23.151 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:23.151 Fetching value of define "__znver1__" : (undefined)
00:02:23.151 Fetching value of define "__znver2__" : (undefined)
00:02:23.151 Fetching value of define "__znver3__" : (undefined)
00:02:23.151 Fetching value of define "__znver4__" : (undefined)
00:02:23.151 Library asan found: YES
00:02:23.151 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:23.151 Message: lib/log: Defining dependency "log"
00:02:23.151 Message: lib/kvargs: Defining dependency "kvargs"
00:02:23.151 Message: lib/telemetry: Defining dependency "telemetry"
00:02:23.151 Library rt found: YES
00:02:23.151 Checking for function "getentropy" : NO
00:02:23.151 Message: lib/eal: Defining dependency "eal"
00:02:23.151 Message: lib/ring: Defining dependency "ring"
00:02:23.151 Message: lib/rcu: Defining dependency "rcu"
00:02:23.151 Message: lib/mempool: Defining dependency "mempool"
00:02:23.151 Message: lib/mbuf: Defining dependency "mbuf"
00:02:23.151 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:23.151 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:23.151 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:23.151 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:23.151 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:23.151 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:23.151 Compiler for C supports arguments -mpclmul: YES
00:02:23.151 Compiler for C supports arguments -maes: YES
00:02:23.151 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:23.151 Compiler for C supports arguments -mavx512bw: YES
00:02:23.151 Compiler for C supports arguments -mavx512dq: YES
00:02:23.151 Compiler for C supports arguments -mavx512vl: YES
00:02:23.151 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:23.151 Compiler for C supports arguments -mavx2: YES
00:02:23.151 Compiler for C supports arguments -mavx: YES
00:02:23.151 Message: lib/net: Defining dependency "net"
00:02:23.151 Message: lib/meter: Defining dependency "meter"
00:02:23.151 Message: lib/ethdev: Defining dependency "ethdev"
00:02:23.151 Message: lib/pci: Defining dependency "pci"
00:02:23.151 Message: lib/cmdline: Defining dependency "cmdline"
00:02:23.151 Message: lib/hash: Defining dependency "hash"
00:02:23.151 Message: lib/timer: Defining dependency "timer"
00:02:23.151 Message: lib/compressdev: Defining dependency "compressdev"
00:02:23.151 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:23.151 Message: lib/dmadev: Defining dependency "dmadev"
00:02:23.151 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:23.151 Message: lib/power: Defining dependency "power"
00:02:23.151 Message: lib/reorder: Defining dependency "reorder"
00:02:23.151 Message: lib/security: Defining dependency "security"
00:02:23.151 Has header "linux/userfaultfd.h" : YES
00:02:23.151 Has header "linux/vduse.h" : YES
00:02:23.151 Message: lib/vhost: Defining dependency "vhost"
00:02:23.151 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:23.151 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:23.151 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:23.152 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:23.152 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:23.152 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:23.152 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:23.152 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:23.152 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:23.152 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:23.152 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:23.152 Configuring doxy-api-html.conf using configuration
00:02:23.152 Configuring doxy-api-man.conf using configuration
00:02:23.152 Program mandb found: YES (/usr/bin/mandb)
00:02:23.152 Program sphinx-build found: NO
00:02:23.152 Configuring rte_build_config.h using configuration
00:02:23.152 Message:
00:02:23.152 =================
00:02:23.152 Applications Enabled
00:02:23.152 =================
00:02:23.152
00:02:23.152 apps:
00:02:23.152
00:02:23.152
00:02:23.152 Message:
00:02:23.152 =================
00:02:23.152 Libraries Enabled
00:02:23.152 =================
00:02:23.152
00:02:23.152 libs:
00:02:23.152 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:23.152 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:23.152 cryptodev, dmadev, power, reorder, security, vhost,
00:02:23.152
00:02:23.152 Message:
00:02:23.152 ===============
00:02:23.152 Drivers Enabled
00:02:23.152 ===============
00:02:23.152
00:02:23.152 common:
00:02:23.152
00:02:23.152 bus:
00:02:23.152 pci, vdev,
00:02:23.152 mempool:
00:02:23.152 ring,
00:02:23.152 dma:
00:02:23.152
00:02:23.152 net:
00:02:23.152
00:02:23.152 crypto:
00:02:23.152
00:02:23.152 compress:
00:02:23.152
00:02:23.152 vdpa:
00:02:23.152
00:02:23.152
00:02:23.152 Message:
00:02:23.152 =================
00:02:23.152 Content Skipped
00:02:23.152 =================
00:02:23.152
00:02:23.152 apps:
00:02:23.152 dumpcap: explicitly disabled via build config
00:02:23.152 graph: explicitly disabled via build config
00:02:23.152 pdump: explicitly disabled via build config
00:02:23.152 proc-info: explicitly disabled via build config
00:02:23.152 test-acl: explicitly disabled via build config
00:02:23.152 test-bbdev: explicitly disabled via build config
00:02:23.152 test-cmdline: explicitly disabled via build config
00:02:23.152 test-compress-perf: explicitly disabled via build config
00:02:23.152 test-crypto-perf: explicitly disabled via build config
00:02:23.152 test-dma-perf: explicitly disabled via build config
00:02:23.152 test-eventdev: explicitly disabled via build config
00:02:23.152 test-fib: explicitly disabled via build config
00:02:23.152 test-flow-perf: explicitly disabled via build config
00:02:23.152 test-gpudev: explicitly disabled via build config
00:02:23.152 test-mldev: explicitly disabled via build config
00:02:23.152 test-pipeline: explicitly disabled via build config
00:02:23.152 test-pmd: explicitly disabled via build config
00:02:23.152 test-regex: explicitly disabled via build config
00:02:23.152 test-sad: explicitly disabled via build config
00:02:23.152 test-security-perf: explicitly disabled via build config
00:02:23.152
00:02:23.152 libs:
00:02:23.152 argparse: explicitly disabled via build config
00:02:23.152 metrics: explicitly disabled via build config
00:02:23.152 acl: explicitly disabled via build config
00:02:23.152 bbdev: explicitly disabled via build config
00:02:23.152 bitratestats: explicitly disabled via build config
00:02:23.152 bpf: explicitly disabled via build config
00:02:23.152 cfgfile: explicitly disabled via build config
00:02:23.152 distributor: explicitly disabled via build config
00:02:23.152 efd: explicitly disabled via build config
00:02:23.152 eventdev: explicitly disabled via build config
00:02:23.152 dispatcher: explicitly disabled via build config
00:02:23.152 gpudev: explicitly disabled via build config
00:02:23.152 gro: explicitly disabled via build config
00:02:23.152 gso: explicitly disabled via build config
00:02:23.152 ip_frag: explicitly disabled via build config
00:02:23.152 jobstats: explicitly disabled via build config
00:02:23.152 latencystats: explicitly disabled via build config
00:02:23.152 lpm: explicitly disabled via build config
00:02:23.152 member: explicitly disabled via build config
00:02:23.152 pcapng: explicitly disabled via build config
00:02:23.152 rawdev: explicitly disabled via build config
00:02:23.152 regexdev: explicitly disabled via build config
00:02:23.152 mldev: explicitly disabled via build config
00:02:23.152 rib: explicitly disabled via build config
00:02:23.152 sched: explicitly disabled via build config
00:02:23.152 stack: explicitly disabled via build config
00:02:23.152 ipsec: explicitly disabled via build config
00:02:23.152 pdcp: explicitly disabled via build config
00:02:23.152 fib: explicitly disabled via build config
00:02:23.152 port: explicitly disabled via build config
00:02:23.152 pdump: explicitly disabled via build config
00:02:23.152 table: explicitly disabled via build config
00:02:23.152 pipeline: explicitly disabled via build config
00:02:23.152 graph: explicitly disabled via build config
00:02:23.152 node: explicitly disabled via build config
00:02:23.152
00:02:23.152 drivers:
00:02:23.152 common/cpt: not in enabled drivers build config
00:02:23.152 common/dpaax: not in enabled drivers build config
00:02:23.152 common/iavf: not in enabled drivers build config
00:02:23.152 common/idpf: not in enabled drivers build config
00:02:23.152 common/ionic: not in enabled drivers build config
00:02:23.152 common/mvep: not in enabled drivers build config
00:02:23.152 common/octeontx: not in enabled drivers build config
00:02:23.152 bus/auxiliary: not in enabled drivers build config
00:02:23.152 bus/cdx: not in enabled drivers build config
00:02:23.152 bus/dpaa: not in enabled drivers build config
00:02:23.152 bus/fslmc: not in enabled drivers build config
00:02:23.152 bus/ifpga: not in enabled drivers build config
00:02:23.152 bus/platform: not in enabled drivers build config
00:02:23.152 bus/uacce: not in enabled drivers build config
00:02:23.152 bus/vmbus: not in enabled drivers build config
00:02:23.152 common/cnxk: not in enabled drivers build config
00:02:23.152 common/mlx5: not in enabled drivers build config
00:02:23.152 common/nfp: not in enabled drivers build config
00:02:23.152 common/nitrox: not in enabled drivers build config
00:02:23.152 common/qat: not in enabled drivers build config
00:02:23.152 common/sfc_efx: not in enabled drivers build config
00:02:23.152 mempool/bucket: not in enabled drivers build config
00:02:23.152 mempool/cnxk: not in enabled drivers build config
00:02:23.152 mempool/dpaa: not in enabled drivers build config
00:02:23.152 mempool/dpaa2: not in enabled drivers build config
00:02:23.152 mempool/octeontx: not in enabled drivers build config
00:02:23.152 mempool/stack: not in enabled drivers build config
00:02:23.152 dma/cnxk: not in enabled drivers build config
00:02:23.152 dma/dpaa: not in enabled drivers build config
00:02:23.152 dma/dpaa2: not in enabled drivers build config
00:02:23.152 dma/hisilicon: not in enabled drivers build config
00:02:23.152 dma/idxd: not in enabled drivers build config
00:02:23.152 dma/ioat: not in enabled drivers build config
00:02:23.152 dma/skeleton: not in enabled drivers build config
00:02:23.152 net/af_packet: not in enabled drivers build config
00:02:23.152 net/af_xdp: not in enabled drivers build config
00:02:23.152 net/ark: not in enabled drivers build config
00:02:23.152 net/atlantic: not in enabled drivers build config
00:02:23.152 net/avp: not in enabled drivers build config
00:02:23.152 net/axgbe: not in enabled drivers build config
00:02:23.152 net/bnx2x: not in enabled drivers build config
00:02:23.152 net/bnxt: not in enabled drivers build config
00:02:23.152 net/bonding: not in enabled drivers build config
00:02:23.152 net/cnxk: not in enabled drivers build config
00:02:23.152 net/cpfl: not in enabled drivers build config
00:02:23.152 net/cxgbe: not in enabled drivers build config
00:02:23.152 net/dpaa: not in enabled drivers build config
00:02:23.152 net/dpaa2: not in enabled drivers build config
00:02:23.152 net/e1000: not in enabled drivers build config
00:02:23.152 net/ena: not in enabled drivers build config
00:02:23.152 net/enetc: not in enabled drivers build config
00:02:23.152 net/enetfec: not in enabled drivers build config
00:02:23.152 net/enic: not in enabled drivers build config
00:02:23.152 net/failsafe: not in enabled drivers build config
00:02:23.152 net/fm10k: not in enabled drivers build config
00:02:23.152 net/gve: not in enabled drivers build config
00:02:23.152 net/hinic: not in enabled drivers build config
00:02:23.152 net/hns3: not in enabled drivers build config
00:02:23.152 net/i40e: not in enabled drivers build config
00:02:23.152 net/iavf: not in enabled drivers build config
00:02:23.152 net/ice: not in enabled drivers build config
00:02:23.152 net/idpf: not in enabled drivers build config
00:02:23.152 net/igc: not in enabled drivers build config
00:02:23.152 net/ionic: not in enabled drivers build config
00:02:23.152 net/ipn3ke: not in enabled drivers build config
00:02:23.152 net/ixgbe: not in enabled drivers build config
00:02:23.152 net/mana: not in enabled drivers build config
00:02:23.152 net/memif: not in enabled drivers build config
00:02:23.152 net/mlx4: not in enabled drivers build config
00:02:23.152 net/mlx5: not in enabled drivers build config
00:02:23.152 net/mvneta: not in enabled drivers build config
00:02:23.152 net/mvpp2: not in enabled drivers build config
00:02:23.153 net/netvsc: not in enabled drivers build config
00:02:23.153 net/nfb: not in enabled drivers build config
00:02:23.153 net/nfp: not in enabled drivers build config
00:02:23.153 net/ngbe: not in enabled drivers build config
00:02:23.153 net/null: not in enabled drivers build config
00:02:23.153 net/octeontx: not in enabled drivers build config
00:02:23.153 net/octeon_ep: not in enabled drivers build config
00:02:23.153 net/pcap: not in enabled drivers build config
00:02:23.153 net/pfe: not in enabled drivers build config
00:02:23.153 net/qede: not in enabled drivers build config
00:02:23.153 net/ring: not in enabled drivers build config
00:02:23.153 net/sfc: not in enabled drivers build config
00:02:23.153 net/softnic: not in enabled drivers build config
00:02:23.153 net/tap: not in enabled drivers build config
00:02:23.153 net/thunderx: not in enabled drivers build config
00:02:23.153 net/txgbe: not in enabled drivers build config
00:02:23.153 net/vdev_netvsc: not in enabled drivers build config
00:02:23.153 net/vhost: not in enabled drivers build config
00:02:23.153 net/virtio: not in enabled drivers build config
00:02:23.153 net/vmxnet3: not in enabled drivers build config
00:02:23.153 raw/*: missing internal dependency, "rawdev"
00:02:23.153 crypto/armv8: not in enabled drivers build config
00:02:23.153 crypto/bcmfs: not in enabled drivers build config
00:02:23.153 crypto/caam_jr: not in enabled drivers build config
00:02:23.153 crypto/ccp: not in enabled drivers build config
00:02:23.153 crypto/cnxk: not in enabled drivers build config
00:02:23.153 crypto/dpaa_sec: not in enabled drivers build config
00:02:23.153 crypto/dpaa2_sec: not in enabled drivers build config
00:02:23.153 crypto/ipsec_mb: not in enabled drivers build config
00:02:23.153 crypto/mlx5: not in enabled drivers build config
00:02:23.153 crypto/mvsam: not in enabled drivers build config
00:02:23.153 crypto/nitrox: not in enabled drivers build config 00:02:23.153 crypto/null: not in enabled drivers build config 00:02:23.153 crypto/octeontx: not in enabled drivers build config 00:02:23.153 crypto/openssl: not in enabled drivers build config 00:02:23.153 crypto/scheduler: not in enabled drivers build config 00:02:23.153 crypto/uadk: not in enabled drivers build config 00:02:23.153 crypto/virtio: not in enabled drivers build config 00:02:23.153 compress/isal: not in enabled drivers build config 00:02:23.153 compress/mlx5: not in enabled drivers build config 00:02:23.153 compress/nitrox: not in enabled drivers build config 00:02:23.153 compress/octeontx: not in enabled drivers build config 00:02:23.153 compress/zlib: not in enabled drivers build config 00:02:23.153 regex/*: missing internal dependency, "regexdev" 00:02:23.153 ml/*: missing internal dependency, "mldev" 00:02:23.153 vdpa/ifc: not in enabled drivers build config 00:02:23.153 vdpa/mlx5: not in enabled drivers build config 00:02:23.153 vdpa/nfp: not in enabled drivers build config 00:02:23.153 vdpa/sfc: not in enabled drivers build config 00:02:23.153 event/*: missing internal dependency, "eventdev" 00:02:23.153 baseband/*: missing internal dependency, "bbdev" 00:02:23.153 gpu/*: missing internal dependency, "gpudev" 00:02:23.153 00:02:23.153 00:02:23.153 Build targets in project: 85 00:02:23.153 00:02:23.153 DPDK 24.03.0 00:02:23.153 00:02:23.153 User defined options 00:02:23.153 buildtype : debug 00:02:23.153 default_library : shared 00:02:23.153 libdir : lib 00:02:23.153 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:23.153 b_sanitize : address 00:02:23.153 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:23.153 c_link_args : 00:02:23.153 cpu_instruction_set: native 00:02:23.153 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:23.153 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:23.153 enable_docs : false 00:02:23.153 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:23.153 enable_kmods : false 00:02:23.153 max_lcores : 128 00:02:23.153 tests : false 00:02:23.153 00:02:23.153 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.153 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:23.445 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.446 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.446 [3/268] Linking static target lib/librte_kvargs.a 00:02:23.446 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.446 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.446 [6/268] Linking static target lib/librte_log.a 00:02:23.705 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.705 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.705 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.705 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 
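A note on the configuration summary that ends above, before the [N/268] compile stream continues: this DPDK is deliberately minimal, built as debug shared libraries under AddressSanitizer (b_sanitize : address), with every app disabled and only the bus/pci, bus/vdev and mempool/ring drivers kept. For orientation, a standalone consumer of such a build looks roughly like the sketch below; the compile command in the comment and any EAL flags are illustrative assumptions, not taken from this log.

/* Minimal consumer of the shared DPDK libraries built above.
 * Sketch only; compile along the lines of:
 *   cc hello_eal.c $(pkg-config --cflags --libs libdpdk)
 */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() parses the EAL portion of argv (e.g. "-l 0-1
	 * --no-pci") and returns the number of consumed arguments, or a
	 * negative value on failure. */
	int ret = rte_eal_init(argc, argv);

	if (ret < 0) {
		fprintf(stderr, "rte_eal_init() failed\n");
		return 1;
	}
	printf("EAL up with %u lcore(s)\n", rte_lcore_count());
	rte_eal_cleanup();
	return 0;
}

Because b_sanitize applies to both compile and link steps, the sanitizer runtime travels with the DPDK .so files themselves, which is what lets the SPDK functional tests below run with SPDK_RUN_ASAN=1.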
00:02:23.705 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.705 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.705 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:23.705 [14/268] Linking static target lib/librte_telemetry.a 00:02:23.705 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.705 [16/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.964 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:23.964 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.223 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.223 [20/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.483 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.483 [22/268] Linking target lib/librte_log.so.24.1 00:02:24.483 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.483 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.483 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.483 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.483 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:24.483 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.742 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:24.742 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.742 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.742 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.742 [33/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.742 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:25.001 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:25.001 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:25.002 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:25.002 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:25.002 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:25.002 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:25.002 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:25.002 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:25.261 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:25.261 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:25.261 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:25.261 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:25.520 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:25.520 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:25.520 [49/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:25.779 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.779 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:25.779 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:25.779 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:25.779 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:25.779 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:26.038 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:26.038 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:26.038 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:26.038 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:26.297 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:26.297 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:26.297 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:26.297 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:26.297 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:26.297 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:26.297 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:26.555 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:26.814 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:26.814 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:26.814 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:26.814 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:26.814 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:26.814 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:26.814 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:27.073 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:27.073 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:27.073 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:27.073 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:27.073 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:27.332 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:27.332 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:27.332 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:27.592 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:27.592 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:27.592 [85/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:27.592 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:27.592 [87/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:27.592 [88/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:27.592 [89/268] Linking static target lib/librte_ring.a 00:02:27.592 [90/268] Linking static 
target lib/librte_rcu.a 00:02:27.592 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:27.851 [92/268] Linking static target lib/librte_eal.a 00:02:27.851 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:27.851 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:28.110 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:28.110 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:28.110 [97/268] Linking static target lib/librte_mempool.a 00:02:28.110 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.110 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.110 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:28.368 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:28.368 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:28.368 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:28.368 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:28.369 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:28.627 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:28.627 [107/268] Linking static target lib/librte_mbuf.a 00:02:28.627 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:28.627 [109/268] Linking static target lib/librte_net.a 00:02:28.627 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:28.627 [111/268] Linking static target lib/librte_meter.a 00:02:28.886 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.886 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.886 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:29.186 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:29.186 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.186 [117/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.444 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.444 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:29.444 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:29.703 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:29.703 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:29.703 [123/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.962 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:30.221 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.221 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:30.221 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.221 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:30.221 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:30.221 [130/268] Linking static target lib/librte_pci.a 00:02:30.221 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 
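By the end of the entries above, the core data-path targets (librte_ring, librte_rcu, librte_mempool, librte_mbuf, librte_net) have all been linked; these are the lock-free building blocks the rest of the stack queues through. A rough usage sketch of the rte_ring API those targets provide follows; the ring name and size are made up for illustration.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Count must be a power of two; the SP/SC flags select the
	 * single-producer/single-consumer fast paths. */
	struct rte_ring *r = rte_ring_create("demo_ring", 1024,
			rte_socket_id(), RING_F_SP_ENQ | RING_F_SC_DEQ);
	int value = 42;
	void *obj;

	if (r == NULL)
		return 1;
	if (rte_ring_enqueue(r, &value) == 0 &&   /* the ring stores pointers */
	    rte_ring_dequeue(r, &obj) == 0)
		printf("dequeued %d\n", *(int *)obj);

	rte_ring_free(r);
	rte_eal_cleanup();
	return 0;
}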
00:02:30.221 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:30.221 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:30.481 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:30.481 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.481 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:30.481 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:30.481 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:30.481 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:30.481 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:30.481 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:30.740 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:30.740 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:30.740 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.740 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:30.740 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:30.740 [147/268] Linking static target lib/librte_cmdline.a 00:02:30.740 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.000 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:31.259 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:31.259 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:31.259 [152/268] Linking static target lib/librte_timer.a 00:02:31.259 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:31.259 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:31.259 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:31.259 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:31.519 [157/268] Linking static target lib/librte_ethdev.a 00:02:31.519 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:31.778 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:31.778 [160/268] Linking static target lib/librte_compressdev.a 00:02:31.778 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:31.778 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:32.037 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:32.037 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.037 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.037 [166/268] Linking static target lib/librte_hash.a 00:02:32.037 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:32.037 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:32.037 [169/268] Linking static target lib/librte_dmadev.a 00:02:32.296 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:32.555 [171/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:32.555 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:32.555 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:32.555 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:32.814 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.814 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:32.814 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:32.814 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:32.814 [179/268] Linking static target lib/librte_cryptodev.a 00:02:33.073 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:33.073 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.073 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:33.073 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:33.332 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.332 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:33.332 [186/268] Linking static target lib/librte_power.a 00:02:33.591 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:33.591 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:33.591 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:33.591 [190/268] Linking static target lib/librte_reorder.a 00:02:33.591 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:33.591 [192/268] Linking static target lib/librte_security.a 00:02:33.849 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:34.415 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.415 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:34.673 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:34.673 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.673 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.933 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.933 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:34.933 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:34.933 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.192 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.192 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.450 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.450 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.450 [207/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.450 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.450 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.709 [210/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.709 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.709 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.709 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.968 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.968 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:35.968 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.968 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.968 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.968 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.968 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.968 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:36.227 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.227 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.227 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.227 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.227 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:36.487 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.056 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.249 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.249 [230/268] Linking target lib/librte_eal.so.24.1 00:02:41.249 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.249 [232/268] Linking target lib/librte_ring.so.24.1 00:02:41.249 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.249 [234/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:41.249 [235/268] Linking target lib/librte_pci.so.24.1 00:02:41.249 [236/268] Linking target lib/librte_meter.so.24.1 00:02:41.249 [237/268] Linking target lib/librte_timer.so.24.1 00:02:41.249 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:41.249 [239/268] Linking static target lib/librte_vhost.a 00:02:41.249 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:41.249 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:41.249 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:41.249 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:41.249 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:41.249 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:41.249 [246/268] Linking target lib/librte_mempool.so.24.1 00:02:41.249 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:41.249 [248/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.509 [249/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:41.509 [250/268] Generating 
symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:41.509 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:41.509 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:41.767 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:41.767 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:41.767 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:41.767 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:41.767 [257/268] Linking target lib/librte_net.so.24.1 00:02:41.767 [258/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:41.767 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.026 [260/268] Linking target lib/librte_hash.so.24.1 00:02:42.026 [261/268] Linking target lib/librte_security.so.24.1 00:02:42.026 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.026 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:42.026 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:42.026 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:42.285 [266/268] Linking target lib/librte_power.so.24.1 00:02:43.662 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.662 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:43.662 INFO: autodetecting backend as ninja 00:02:43.662 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:05.596 CC lib/log/log_flags.o 00:03:05.596 CC lib/log/log.o 00:03:05.596 CC lib/log/log_deprecated.o 00:03:05.596 CC lib/ut/ut.o 00:03:05.596 CC lib/ut_mock/mock.o 00:03:05.596 LIB libspdk_ut.a 00:03:05.596 LIB libspdk_ut_mock.a 00:03:05.596 SO libspdk_ut.so.2.0 00:03:05.596 LIB libspdk_log.a 00:03:05.596 SO libspdk_ut_mock.so.6.0 00:03:05.596 SYMLINK libspdk_ut.so 00:03:05.596 SO libspdk_log.so.7.1 00:03:05.596 SYMLINK libspdk_ut_mock.so 00:03:05.596 SYMLINK libspdk_log.so 00:03:05.596 CXX lib/trace_parser/trace.o 00:03:05.596 CC lib/dma/dma.o 00:03:05.596 CC lib/ioat/ioat.o 00:03:05.596 CC lib/util/base64.o 00:03:05.596 CC lib/util/cpuset.o 00:03:05.596 CC lib/util/crc32.o 00:03:05.596 CC lib/util/crc32c.o 00:03:05.596 CC lib/util/bit_array.o 00:03:05.596 CC lib/util/crc16.o 00:03:05.596 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.596 CC lib/util/crc32_ieee.o 00:03:05.596 CC lib/util/crc64.o 00:03:05.596 CC lib/vfio_user/host/vfio_user.o 00:03:05.596 CC lib/util/dif.o 00:03:05.596 LIB libspdk_dma.a 00:03:05.596 CC lib/util/fd.o 00:03:05.596 CC lib/util/fd_group.o 00:03:05.596 SO libspdk_dma.so.5.0 00:03:05.596 CC lib/util/file.o 00:03:05.596 CC lib/util/hexlify.o 00:03:05.596 LIB libspdk_ioat.a 00:03:05.596 SYMLINK libspdk_dma.so 00:03:05.596 CC lib/util/iov.o 00:03:05.596 SO libspdk_ioat.so.7.0 00:03:05.596 CC lib/util/math.o 00:03:05.596 CC lib/util/net.o 00:03:05.596 LIB libspdk_vfio_user.a 00:03:05.596 SYMLINK libspdk_ioat.so 00:03:05.596 CC lib/util/pipe.o 00:03:05.596 SO libspdk_vfio_user.so.5.0 00:03:05.596 CC lib/util/strerror_tls.o 00:03:05.596 CC lib/util/string.o 00:03:05.596 SYMLINK libspdk_vfio_user.so 00:03:05.596 CC lib/util/uuid.o 00:03:05.596 CC lib/util/xor.o 00:03:05.596 CC lib/util/zipf.o 00:03:05.596 CC lib/util/md5.o 00:03:05.596 LIB libspdk_util.a 00:03:05.596 LIB libspdk_trace_parser.a 00:03:05.596 SO libspdk_util.so.10.1 
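The DPDK subproject finishes here ([268/268]) and SPDK's own make output takes over (the CC/LIB/SO/SYMLINK lines), starting with lib/log, the unit-test mocks, and lib/util. As a reminder of what those first objects provide, here is a small sketch assuming the usual spdk/log.h entry points (spdk_log_set_print_level and the leveled SPDK_*LOG macros):

#include "spdk/log.h"

int main(void)
{
	/* Let everything down to DEBUG severity reach stderr. */
	spdk_log_set_print_level(SPDK_LOG_DEBUG);

	SPDK_NOTICELOG("leveled, printf-style logging from lib/log\n");
	SPDK_ERRLOG("error-level message: rc=%d\n", -1);
	return 0;
}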
00:03:05.596 SO libspdk_trace_parser.so.6.0 00:03:05.596 SYMLINK libspdk_trace_parser.so 00:03:05.596 SYMLINK libspdk_util.so 00:03:05.596 CC lib/vmd/vmd.o 00:03:05.596 CC lib/vmd/led.o 00:03:05.596 CC lib/json/json_util.o 00:03:05.596 CC lib/json/json_parse.o 00:03:05.596 CC lib/json/json_write.o 00:03:05.596 CC lib/env_dpdk/env.o 00:03:05.596 CC lib/env_dpdk/memory.o 00:03:05.596 CC lib/rdma_utils/rdma_utils.o 00:03:05.596 CC lib/idxd/idxd.o 00:03:05.596 CC lib/conf/conf.o 00:03:05.596 CC lib/idxd/idxd_user.o 00:03:05.856 LIB libspdk_conf.a 00:03:05.856 CC lib/idxd/idxd_kernel.o 00:03:05.856 LIB libspdk_rdma_utils.a 00:03:05.856 CC lib/env_dpdk/pci.o 00:03:05.856 LIB libspdk_json.a 00:03:05.856 SO libspdk_conf.so.6.0 00:03:05.856 SO libspdk_rdma_utils.so.1.0 00:03:05.856 SO libspdk_json.so.6.0 00:03:05.856 SYMLINK libspdk_conf.so 00:03:05.856 CC lib/env_dpdk/init.o 00:03:05.856 SYMLINK libspdk_rdma_utils.so 00:03:05.856 CC lib/env_dpdk/threads.o 00:03:05.856 CC lib/env_dpdk/pci_ioat.o 00:03:05.856 SYMLINK libspdk_json.so 00:03:05.856 CC lib/env_dpdk/pci_virtio.o 00:03:05.856 CC lib/env_dpdk/pci_vmd.o 00:03:06.115 CC lib/env_dpdk/pci_idxd.o 00:03:06.115 CC lib/env_dpdk/pci_event.o 00:03:06.115 CC lib/env_dpdk/sigbus_handler.o 00:03:06.115 CC lib/env_dpdk/pci_dpdk.o 00:03:06.115 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:06.115 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:06.375 LIB libspdk_idxd.a 00:03:06.375 LIB libspdk_vmd.a 00:03:06.375 SO libspdk_idxd.so.12.1 00:03:06.375 SO libspdk_vmd.so.6.0 00:03:06.375 SYMLINK libspdk_idxd.so 00:03:06.375 SYMLINK libspdk_vmd.so 00:03:06.375 CC lib/rdma_provider/common.o 00:03:06.375 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:06.375 CC lib/jsonrpc/jsonrpc_server.o 00:03:06.375 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:06.375 CC lib/jsonrpc/jsonrpc_client.o 00:03:06.635 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.635 LIB libspdk_rdma_provider.a 00:03:06.635 SO libspdk_rdma_provider.so.7.0 00:03:06.894 LIB libspdk_jsonrpc.a 00:03:06.894 SYMLINK libspdk_rdma_provider.so 00:03:06.894 SO libspdk_jsonrpc.so.6.0 00:03:06.894 SYMLINK libspdk_jsonrpc.so 00:03:07.153 LIB libspdk_env_dpdk.a 00:03:07.411 CC lib/rpc/rpc.o 00:03:07.411 SO libspdk_env_dpdk.so.15.1 00:03:07.411 SYMLINK libspdk_env_dpdk.so 00:03:07.671 LIB libspdk_rpc.a 00:03:07.671 SO libspdk_rpc.so.6.0 00:03:07.671 SYMLINK libspdk_rpc.so 00:03:08.239 CC lib/keyring/keyring_rpc.o 00:03:08.239 CC lib/keyring/keyring.o 00:03:08.239 CC lib/trace/trace.o 00:03:08.239 CC lib/trace/trace_flags.o 00:03:08.239 CC lib/trace/trace_rpc.o 00:03:08.239 CC lib/notify/notify_rpc.o 00:03:08.239 CC lib/notify/notify.o 00:03:08.239 LIB libspdk_notify.a 00:03:08.239 SO libspdk_notify.so.6.0 00:03:08.498 LIB libspdk_keyring.a 00:03:08.498 SO libspdk_keyring.so.2.0 00:03:08.498 LIB libspdk_trace.a 00:03:08.498 SYMLINK libspdk_notify.so 00:03:08.498 SO libspdk_trace.so.11.0 00:03:08.498 SYMLINK libspdk_keyring.so 00:03:08.498 SYMLINK libspdk_trace.so 00:03:09.066 CC lib/sock/sock_rpc.o 00:03:09.066 CC lib/sock/sock.o 00:03:09.066 CC lib/thread/thread.o 00:03:09.066 CC lib/thread/iobuf.o 00:03:09.323 LIB libspdk_sock.a 00:03:09.582 SO libspdk_sock.so.10.0 00:03:09.582 SYMLINK libspdk_sock.so 00:03:10.150 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.150 CC lib/nvme/nvme_ctrlr.o 00:03:10.150 CC lib/nvme/nvme_fabric.o 00:03:10.150 CC lib/nvme/nvme_ns_cmd.o 00:03:10.150 CC lib/nvme/nvme_ns.o 00:03:10.150 CC lib/nvme/nvme_pcie_common.o 00:03:10.150 CC lib/nvme/nvme_qpair.o 00:03:10.150 CC lib/nvme/nvme_pcie.o 00:03:10.150 CC 
lib/nvme/nvme.o 00:03:10.717 CC lib/nvme/nvme_quirks.o 00:03:10.717 LIB libspdk_thread.a 00:03:10.717 CC lib/nvme/nvme_transport.o 00:03:10.717 SO libspdk_thread.so.11.0 00:03:10.717 CC lib/nvme/nvme_discovery.o 00:03:10.717 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.717 SYMLINK libspdk_thread.so 00:03:10.717 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.975 CC lib/nvme/nvme_tcp.o 00:03:10.975 CC lib/nvme/nvme_opal.o 00:03:10.975 CC lib/nvme/nvme_io_msg.o 00:03:11.233 CC lib/nvme/nvme_poll_group.o 00:03:11.233 CC lib/nvme/nvme_zns.o 00:03:11.491 CC lib/accel/accel.o 00:03:11.491 CC lib/blob/blobstore.o 00:03:11.491 CC lib/init/json_config.o 00:03:11.491 CC lib/accel/accel_rpc.o 00:03:11.491 CC lib/accel/accel_sw.o 00:03:11.491 CC lib/virtio/virtio.o 00:03:11.750 CC lib/virtio/virtio_vhost_user.o 00:03:11.750 CC lib/virtio/virtio_vfio_user.o 00:03:11.750 CC lib/init/subsystem.o 00:03:11.750 CC lib/virtio/virtio_pci.o 00:03:12.008 CC lib/nvme/nvme_stubs.o 00:03:12.008 CC lib/init/subsystem_rpc.o 00:03:12.008 CC lib/nvme/nvme_auth.o 00:03:12.008 CC lib/nvme/nvme_cuse.o 00:03:12.008 CC lib/nvme/nvme_rdma.o 00:03:12.008 LIB libspdk_virtio.a 00:03:12.008 CC lib/init/rpc.o 00:03:12.008 SO libspdk_virtio.so.7.0 00:03:12.266 SYMLINK libspdk_virtio.so 00:03:12.266 LIB libspdk_init.a 00:03:12.266 SO libspdk_init.so.6.0 00:03:12.266 CC lib/fsdev/fsdev.o 00:03:12.266 SYMLINK libspdk_init.so 00:03:12.266 CC lib/fsdev/fsdev_io.o 00:03:12.266 CC lib/fsdev/fsdev_rpc.o 00:03:12.525 CC lib/blob/request.o 00:03:12.525 CC lib/blob/zeroes.o 00:03:12.525 LIB libspdk_accel.a 00:03:12.525 SO libspdk_accel.so.16.0 00:03:12.783 CC lib/event/app.o 00:03:12.783 CC lib/blob/blob_bs_dev.o 00:03:12.783 SYMLINK libspdk_accel.so 00:03:12.783 CC lib/event/reactor.o 00:03:12.783 CC lib/event/log_rpc.o 00:03:12.783 CC lib/event/app_rpc.o 00:03:12.783 CC lib/event/scheduler_static.o 00:03:13.041 CC lib/bdev/bdev.o 00:03:13.041 CC lib/bdev/bdev_zone.o 00:03:13.041 CC lib/bdev/bdev_rpc.o 00:03:13.041 LIB libspdk_fsdev.a 00:03:13.041 CC lib/bdev/part.o 00:03:13.041 SO libspdk_fsdev.so.2.0 00:03:13.041 SYMLINK libspdk_fsdev.so 00:03:13.299 CC lib/bdev/scsi_nvme.o 00:03:13.299 LIB libspdk_event.a 00:03:13.299 SO libspdk_event.so.14.0 00:03:13.299 SYMLINK libspdk_event.so 00:03:13.299 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:13.557 LIB libspdk_nvme.a 00:03:13.815 SO libspdk_nvme.so.15.0 00:03:14.074 SYMLINK libspdk_nvme.so 00:03:14.074 LIB libspdk_fuse_dispatcher.a 00:03:14.332 SO libspdk_fuse_dispatcher.so.1.0 00:03:14.332 SYMLINK libspdk_fuse_dispatcher.so 00:03:15.323 LIB libspdk_blob.a 00:03:15.323 SO libspdk_blob.so.11.0 00:03:15.323 SYMLINK libspdk_blob.so 00:03:15.891 CC lib/lvol/lvol.o 00:03:15.891 CC lib/blobfs/blobfs.o 00:03:15.891 CC lib/blobfs/tree.o 00:03:16.150 LIB libspdk_bdev.a 00:03:16.408 SO libspdk_bdev.so.17.0 00:03:16.408 SYMLINK libspdk_bdev.so 00:03:16.667 LIB libspdk_blobfs.a 00:03:16.667 SO libspdk_blobfs.so.10.0 00:03:16.667 CC lib/scsi/lun.o 00:03:16.667 CC lib/scsi/dev.o 00:03:16.667 CC lib/scsi/port.o 00:03:16.667 CC lib/scsi/scsi.o 00:03:16.667 CC lib/ublk/ublk.o 00:03:16.667 CC lib/nbd/nbd.o 00:03:16.667 CC lib/nvmf/ctrlr.o 00:03:16.667 CC lib/ftl/ftl_core.o 00:03:16.667 SYMLINK libspdk_blobfs.so 00:03:16.667 CC lib/ftl/ftl_init.o 00:03:16.926 LIB libspdk_lvol.a 00:03:16.926 SO libspdk_lvol.so.10.0 00:03:16.926 CC lib/ftl/ftl_layout.o 00:03:16.926 CC lib/ftl/ftl_debug.o 00:03:16.926 SYMLINK libspdk_lvol.so 00:03:16.926 CC lib/nbd/nbd_rpc.o 00:03:16.926 CC lib/ublk/ublk_rpc.o 00:03:16.926 CC 
lib/ftl/ftl_io.o 00:03:16.926 CC lib/scsi/scsi_bdev.o 00:03:17.185 CC lib/ftl/ftl_sb.o 00:03:17.185 CC lib/ftl/ftl_l2p.o 00:03:17.185 CC lib/ftl/ftl_l2p_flat.o 00:03:17.185 CC lib/ftl/ftl_nv_cache.o 00:03:17.185 LIB libspdk_nbd.a 00:03:17.185 SO libspdk_nbd.so.7.0 00:03:17.185 CC lib/nvmf/ctrlr_discovery.o 00:03:17.185 SYMLINK libspdk_nbd.so 00:03:17.185 CC lib/nvmf/ctrlr_bdev.o 00:03:17.444 CC lib/nvmf/subsystem.o 00:03:17.444 CC lib/nvmf/nvmf.o 00:03:17.444 CC lib/nvmf/nvmf_rpc.o 00:03:17.444 CC lib/nvmf/transport.o 00:03:17.444 LIB libspdk_ublk.a 00:03:17.444 SO libspdk_ublk.so.3.0 00:03:17.444 SYMLINK libspdk_ublk.so 00:03:17.444 CC lib/scsi/scsi_pr.o 00:03:17.703 CC lib/nvmf/tcp.o 00:03:17.961 CC lib/ftl/ftl_band.o 00:03:17.961 CC lib/scsi/scsi_rpc.o 00:03:18.217 CC lib/nvmf/stubs.o 00:03:18.217 CC lib/scsi/task.o 00:03:18.217 CC lib/nvmf/mdns_server.o 00:03:18.217 CC lib/nvmf/rdma.o 00:03:18.217 LIB libspdk_scsi.a 00:03:18.475 CC lib/nvmf/auth.o 00:03:18.475 CC lib/ftl/ftl_band_ops.o 00:03:18.475 SO libspdk_scsi.so.9.0 00:03:18.475 CC lib/ftl/ftl_writer.o 00:03:18.475 SYMLINK libspdk_scsi.so 00:03:18.475 CC lib/ftl/ftl_rq.o 00:03:18.733 CC lib/ftl/ftl_reloc.o 00:03:18.733 CC lib/ftl/ftl_l2p_cache.o 00:03:18.733 CC lib/ftl/ftl_p2l.o 00:03:18.733 CC lib/iscsi/conn.o 00:03:18.733 CC lib/ftl/ftl_p2l_log.o 00:03:18.991 CC lib/vhost/vhost.o 00:03:18.991 CC lib/iscsi/init_grp.o 00:03:19.250 CC lib/iscsi/iscsi.o 00:03:19.250 CC lib/vhost/vhost_rpc.o 00:03:19.250 CC lib/ftl/mngt/ftl_mngt.o 00:03:19.250 CC lib/iscsi/param.o 00:03:19.250 CC lib/iscsi/portal_grp.o 00:03:19.250 CC lib/vhost/vhost_scsi.o 00:03:19.509 CC lib/iscsi/tgt_node.o 00:03:19.509 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:19.509 CC lib/iscsi/iscsi_subsystem.o 00:03:19.509 CC lib/iscsi/iscsi_rpc.o 00:03:19.509 CC lib/iscsi/task.o 00:03:19.768 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:19.768 CC lib/vhost/vhost_blk.o 00:03:19.768 CC lib/vhost/rte_vhost_user.o 00:03:19.768 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:20.061 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:20.061 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:20.061 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:20.061 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:20.061 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:20.327 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:20.327 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:20.327 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:20.327 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:20.327 CC lib/ftl/utils/ftl_conf.o 00:03:20.327 CC lib/ftl/utils/ftl_md.o 00:03:20.327 CC lib/ftl/utils/ftl_mempool.o 00:03:20.327 CC lib/ftl/utils/ftl_bitmap.o 00:03:20.327 CC lib/ftl/utils/ftl_property.o 00:03:20.327 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:20.586 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:20.586 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:20.586 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:20.586 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:20.586 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:20.586 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:20.586 LIB libspdk_iscsi.a 00:03:20.844 SO libspdk_iscsi.so.8.0 00:03:20.844 LIB libspdk_vhost.a 00:03:20.844 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:20.844 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:20.844 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:20.844 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:20.844 LIB libspdk_nvmf.a 00:03:20.844 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:20.844 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:20.844 CC lib/ftl/base/ftl_base_dev.o 00:03:20.844 SO libspdk_vhost.so.8.0 00:03:20.844 SYMLINK libspdk_iscsi.so 00:03:21.103 CC 
lib/ftl/base/ftl_base_bdev.o 00:03:21.103 CC lib/ftl/ftl_trace.o 00:03:21.103 SYMLINK libspdk_vhost.so 00:03:21.103 SO libspdk_nvmf.so.20.0 00:03:21.362 LIB libspdk_ftl.a 00:03:21.362 SYMLINK libspdk_nvmf.so 00:03:21.621 SO libspdk_ftl.so.9.0 00:03:21.880 SYMLINK libspdk_ftl.so 00:03:22.447 CC module/env_dpdk/env_dpdk_rpc.o 00:03:22.447 CC module/keyring/linux/keyring.o 00:03:22.447 CC module/accel/dsa/accel_dsa.o 00:03:22.447 CC module/blob/bdev/blob_bdev.o 00:03:22.448 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:22.448 CC module/accel/error/accel_error.o 00:03:22.448 CC module/accel/ioat/accel_ioat.o 00:03:22.448 CC module/sock/posix/posix.o 00:03:22.448 CC module/fsdev/aio/fsdev_aio.o 00:03:22.448 CC module/keyring/file/keyring.o 00:03:22.448 LIB libspdk_env_dpdk_rpc.a 00:03:22.448 SO libspdk_env_dpdk_rpc.so.6.0 00:03:22.448 SYMLINK libspdk_env_dpdk_rpc.so 00:03:22.448 CC module/keyring/file/keyring_rpc.o 00:03:22.707 CC module/accel/error/accel_error_rpc.o 00:03:22.707 CC module/accel/ioat/accel_ioat_rpc.o 00:03:22.707 CC module/keyring/linux/keyring_rpc.o 00:03:22.707 LIB libspdk_scheduler_dynamic.a 00:03:22.707 SO libspdk_scheduler_dynamic.so.4.0 00:03:22.707 LIB libspdk_keyring_file.a 00:03:22.707 SO libspdk_keyring_file.so.2.0 00:03:22.707 CC module/accel/dsa/accel_dsa_rpc.o 00:03:22.707 LIB libspdk_accel_error.a 00:03:22.707 SYMLINK libspdk_scheduler_dynamic.so 00:03:22.707 LIB libspdk_blob_bdev.a 00:03:22.707 SO libspdk_accel_error.so.2.0 00:03:22.707 LIB libspdk_keyring_linux.a 00:03:22.707 LIB libspdk_accel_ioat.a 00:03:22.707 SYMLINK libspdk_keyring_file.so 00:03:22.707 SO libspdk_blob_bdev.so.11.0 00:03:22.707 SO libspdk_keyring_linux.so.1.0 00:03:22.707 SO libspdk_accel_ioat.so.6.0 00:03:22.707 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:22.967 SYMLINK libspdk_accel_error.so 00:03:22.967 SYMLINK libspdk_blob_bdev.so 00:03:22.967 SYMLINK libspdk_keyring_linux.so 00:03:22.967 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:22.967 LIB libspdk_accel_dsa.a 00:03:22.967 CC module/fsdev/aio/linux_aio_mgr.o 00:03:22.967 SO libspdk_accel_dsa.so.5.0 00:03:22.967 SYMLINK libspdk_accel_ioat.so 00:03:22.967 CC module/scheduler/gscheduler/gscheduler.o 00:03:22.967 SYMLINK libspdk_accel_dsa.so 00:03:22.967 LIB libspdk_scheduler_dpdk_governor.a 00:03:22.967 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:22.967 CC module/accel/iaa/accel_iaa.o 00:03:23.226 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:23.226 CC module/accel/iaa/accel_iaa_rpc.o 00:03:23.226 LIB libspdk_scheduler_gscheduler.a 00:03:23.226 CC module/bdev/error/vbdev_error.o 00:03:23.226 CC module/bdev/delay/vbdev_delay.o 00:03:23.226 SO libspdk_scheduler_gscheduler.so.4.0 00:03:23.226 LIB libspdk_fsdev_aio.a 00:03:23.226 CC module/bdev/gpt/gpt.o 00:03:23.226 CC module/blobfs/bdev/blobfs_bdev.o 00:03:23.226 SO libspdk_fsdev_aio.so.1.0 00:03:23.226 SYMLINK libspdk_scheduler_gscheduler.so 00:03:23.226 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:23.226 LIB libspdk_sock_posix.a 00:03:23.226 LIB libspdk_accel_iaa.a 00:03:23.226 CC module/bdev/lvol/vbdev_lvol.o 00:03:23.226 SYMLINK libspdk_fsdev_aio.so 00:03:23.485 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:23.485 SO libspdk_accel_iaa.so.3.0 00:03:23.485 SO libspdk_sock_posix.so.6.0 00:03:23.485 CC module/bdev/gpt/vbdev_gpt.o 00:03:23.485 SYMLINK libspdk_accel_iaa.so 00:03:23.485 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:23.485 SYMLINK libspdk_sock_posix.so 00:03:23.485 CC module/bdev/error/vbdev_error_rpc.o 00:03:23.485 CC module/bdev/malloc/bdev_malloc.o 
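The bdev modules starting to compile in this stretch (bdev/error, bdev/delay, bdev/gpt, bdev/lvol, bdev/malloc) are configured at runtime over the JSON-RPC plumbing linked a little earlier (libspdk_json, libspdk_jsonrpc, libspdk_rpc). A hedged sketch of that JSON writer API follows; the callback signature and flag name are my assumptions about spdk/json.h, and the method string is only an example:

#include <stdio.h>
#include "spdk/json.h"

/* The writer emits output in chunks through this callback. */
static int
write_cb(void *cb_ctx, const void *data, size_t size)
{
	return fwrite(data, 1, size, stdout) == size ? 0 : -1;
}

int main(void)
{
	struct spdk_json_write_ctx *w =
		spdk_json_write_begin(write_cb, NULL, SPDK_JSON_WRITE_FLAG_FORMATTED);

	if (w == NULL)
		return 1;
	/* Roughly the shape of a JSON-RPC request body. */
	spdk_json_write_object_begin(w);
	spdk_json_write_named_string(w, "method", "bdev_get_bdevs");
	spdk_json_write_object_end(w);
	return spdk_json_write_end(w);
}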
00:03:23.745 LIB libspdk_blobfs_bdev.a 00:03:23.745 CC module/bdev/nvme/bdev_nvme.o 00:03:23.745 LIB libspdk_bdev_delay.a 00:03:23.745 CC module/bdev/null/bdev_null.o 00:03:23.745 LIB libspdk_bdev_error.a 00:03:23.745 SO libspdk_blobfs_bdev.so.6.0 00:03:23.745 SO libspdk_bdev_delay.so.6.0 00:03:23.745 CC module/bdev/passthru/vbdev_passthru.o 00:03:23.745 SO libspdk_bdev_error.so.6.0 00:03:23.745 LIB libspdk_bdev_gpt.a 00:03:23.745 SYMLINK libspdk_blobfs_bdev.so 00:03:23.745 SYMLINK libspdk_bdev_delay.so 00:03:23.745 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:23.745 SO libspdk_bdev_gpt.so.6.0 00:03:23.745 CC module/bdev/null/bdev_null_rpc.o 00:03:23.745 SYMLINK libspdk_bdev_error.so 00:03:23.745 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:23.745 CC module/bdev/nvme/nvme_rpc.o 00:03:23.745 SYMLINK libspdk_bdev_gpt.so 00:03:23.745 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:24.004 CC module/bdev/nvme/bdev_mdns_client.o 00:03:24.004 CC module/bdev/nvme/vbdev_opal.o 00:03:24.004 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:24.004 LIB libspdk_bdev_null.a 00:03:24.004 LIB libspdk_bdev_lvol.a 00:03:24.004 SO libspdk_bdev_null.so.6.0 00:03:24.004 SO libspdk_bdev_lvol.so.6.0 00:03:24.004 LIB libspdk_bdev_malloc.a 00:03:24.004 SO libspdk_bdev_malloc.so.6.0 00:03:24.004 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:24.004 LIB libspdk_bdev_passthru.a 00:03:24.004 SYMLINK libspdk_bdev_null.so 00:03:24.005 SO libspdk_bdev_passthru.so.6.0 00:03:24.005 SYMLINK libspdk_bdev_lvol.so 00:03:24.005 SYMLINK libspdk_bdev_malloc.so 00:03:24.263 SYMLINK libspdk_bdev_passthru.so 00:03:24.263 CC module/bdev/raid/bdev_raid.o 00:03:24.263 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:24.263 CC module/bdev/split/vbdev_split.o 00:03:24.263 CC module/bdev/xnvme/bdev_xnvme.o 00:03:24.263 CC module/bdev/ftl/bdev_ftl.o 00:03:24.521 CC module/bdev/aio/bdev_aio.o 00:03:24.521 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:24.521 CC module/bdev/iscsi/bdev_iscsi.o 00:03:24.521 CC module/bdev/split/vbdev_split_rpc.o 00:03:24.522 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:24.779 LIB libspdk_bdev_split.a 00:03:24.780 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:24.780 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:24.780 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:24.780 SO libspdk_bdev_split.so.6.0 00:03:24.780 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:24.780 SYMLINK libspdk_bdev_split.so 00:03:24.780 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:24.780 CC module/bdev/aio/bdev_aio_rpc.o 00:03:24.780 LIB libspdk_bdev_zone_block.a 00:03:24.780 LIB libspdk_bdev_iscsi.a 00:03:24.780 LIB libspdk_bdev_xnvme.a 00:03:24.780 SO libspdk_bdev_zone_block.so.6.0 00:03:24.780 SO libspdk_bdev_iscsi.so.6.0 00:03:25.039 SO libspdk_bdev_xnvme.so.3.0 00:03:25.039 LIB libspdk_bdev_ftl.a 00:03:25.039 LIB libspdk_bdev_aio.a 00:03:25.039 SO libspdk_bdev_ftl.so.6.0 00:03:25.039 SYMLINK libspdk_bdev_iscsi.so 00:03:25.039 SYMLINK libspdk_bdev_zone_block.so 00:03:25.039 SO libspdk_bdev_aio.so.6.0 00:03:25.039 CC module/bdev/raid/bdev_raid_rpc.o 00:03:25.039 CC module/bdev/raid/bdev_raid_sb.o 00:03:25.039 CC module/bdev/raid/raid0.o 00:03:25.039 SYMLINK libspdk_bdev_xnvme.so 00:03:25.039 CC module/bdev/raid/raid1.o 00:03:25.039 CC module/bdev/raid/concat.o 00:03:25.039 SYMLINK libspdk_bdev_ftl.so 00:03:25.039 SYMLINK libspdk_bdev_aio.so 00:03:25.039 LIB libspdk_bdev_virtio.a 00:03:25.039 SO libspdk_bdev_virtio.so.6.0 00:03:25.299 SYMLINK libspdk_bdev_virtio.so 00:03:25.559 LIB libspdk_bdev_raid.a 00:03:25.559 SO libspdk_bdev_raid.so.6.0 
00:03:25.817 SYMLINK libspdk_bdev_raid.so 00:03:26.761 LIB libspdk_bdev_nvme.a 00:03:26.761 SO libspdk_bdev_nvme.so.7.1 00:03:27.020 SYMLINK libspdk_bdev_nvme.so 00:03:27.587 CC module/event/subsystems/sock/sock.o 00:03:27.587 CC module/event/subsystems/keyring/keyring.o 00:03:27.587 CC module/event/subsystems/fsdev/fsdev.o 00:03:27.587 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:27.587 CC module/event/subsystems/vmd/vmd.o 00:03:27.587 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:27.587 CC module/event/subsystems/scheduler/scheduler.o 00:03:27.587 CC module/event/subsystems/iobuf/iobuf.o 00:03:27.587 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:27.845 LIB libspdk_event_keyring.a 00:03:27.845 LIB libspdk_event_vhost_blk.a 00:03:27.845 LIB libspdk_event_sock.a 00:03:27.845 LIB libspdk_event_scheduler.a 00:03:27.845 LIB libspdk_event_fsdev.a 00:03:27.845 LIB libspdk_event_vmd.a 00:03:27.845 SO libspdk_event_keyring.so.1.0 00:03:27.845 SO libspdk_event_sock.so.5.0 00:03:27.845 SO libspdk_event_scheduler.so.4.0 00:03:27.845 SO libspdk_event_vhost_blk.so.3.0 00:03:27.845 SO libspdk_event_fsdev.so.1.0 00:03:27.845 SO libspdk_event_vmd.so.6.0 00:03:27.845 LIB libspdk_event_iobuf.a 00:03:27.845 SYMLINK libspdk_event_keyring.so 00:03:27.845 SYMLINK libspdk_event_scheduler.so 00:03:27.845 SYMLINK libspdk_event_sock.so 00:03:27.845 SYMLINK libspdk_event_fsdev.so 00:03:27.845 SO libspdk_event_iobuf.so.3.0 00:03:27.845 SYMLINK libspdk_event_vhost_blk.so 00:03:27.845 SYMLINK libspdk_event_vmd.so 00:03:27.845 SYMLINK libspdk_event_iobuf.so 00:03:28.442 CC module/event/subsystems/accel/accel.o 00:03:28.442 LIB libspdk_event_accel.a 00:03:28.442 SO libspdk_event_accel.so.6.0 00:03:28.701 SYMLINK libspdk_event_accel.so 00:03:28.960 CC module/event/subsystems/bdev/bdev.o 00:03:29.219 LIB libspdk_event_bdev.a 00:03:29.219 SO libspdk_event_bdev.so.6.0 00:03:29.219 SYMLINK libspdk_event_bdev.so 00:03:29.835 CC module/event/subsystems/scsi/scsi.o 00:03:29.835 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:29.835 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:29.835 CC module/event/subsystems/ublk/ublk.o 00:03:29.835 CC module/event/subsystems/nbd/nbd.o 00:03:29.835 LIB libspdk_event_ublk.a 00:03:29.835 LIB libspdk_event_scsi.a 00:03:29.835 LIB libspdk_event_nbd.a 00:03:29.835 SO libspdk_event_ublk.so.3.0 00:03:29.835 SO libspdk_event_nbd.so.6.0 00:03:29.835 SO libspdk_event_scsi.so.6.0 00:03:30.115 SYMLINK libspdk_event_ublk.so 00:03:30.115 SYMLINK libspdk_event_nbd.so 00:03:30.115 SYMLINK libspdk_event_scsi.so 00:03:30.115 LIB libspdk_event_nvmf.a 00:03:30.115 SO libspdk_event_nvmf.so.6.0 00:03:30.115 SYMLINK libspdk_event_nvmf.so 00:03:30.374 CC module/event/subsystems/iscsi/iscsi.o 00:03:30.374 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:30.374 LIB libspdk_event_iscsi.a 00:03:30.374 LIB libspdk_event_vhost_scsi.a 00:03:30.632 SO libspdk_event_iscsi.so.6.0 00:03:30.632 SO libspdk_event_vhost_scsi.so.3.0 00:03:30.632 SYMLINK libspdk_event_vhost_scsi.so 00:03:30.632 SYMLINK libspdk_event_iscsi.so 00:03:30.891 SO libspdk.so.6.0 00:03:30.891 SYMLINK libspdk.so 00:03:31.150 CC app/trace_record/trace_record.o 00:03:31.150 CC app/spdk_nvme_identify/identify.o 00:03:31.150 CC app/spdk_nvme_perf/perf.o 00:03:31.150 CXX app/trace/trace.o 00:03:31.150 CC app/spdk_lspci/spdk_lspci.o 00:03:31.150 CC app/iscsi_tgt/iscsi_tgt.o 00:03:31.150 CC app/nvmf_tgt/nvmf_main.o 00:03:31.150 CC app/spdk_tgt/spdk_tgt.o 00:03:31.150 CC examples/util/zipf/zipf.o 00:03:31.150 CC 
test/thread/poller_perf/poller_perf.o 00:03:31.409 LINK spdk_lspci 00:03:31.409 LINK nvmf_tgt 00:03:31.409 LINK spdk_trace_record 00:03:31.409 LINK zipf 00:03:31.409 LINK poller_perf 00:03:31.409 LINK iscsi_tgt 00:03:31.409 LINK spdk_tgt 00:03:31.669 LINK spdk_trace 00:03:31.669 CC app/spdk_nvme_discover/discovery_aer.o 00:03:31.669 CC examples/ioat/perf/perf.o 00:03:31.669 CC app/spdk_top/spdk_top.o 00:03:31.928 CC app/spdk_dd/spdk_dd.o 00:03:31.928 TEST_HEADER include/spdk/accel.h 00:03:31.928 TEST_HEADER include/spdk/accel_module.h 00:03:31.928 TEST_HEADER include/spdk/assert.h 00:03:31.928 TEST_HEADER include/spdk/barrier.h 00:03:31.928 TEST_HEADER include/spdk/base64.h 00:03:31.928 CC test/dma/test_dma/test_dma.o 00:03:31.928 TEST_HEADER include/spdk/bdev.h 00:03:31.928 TEST_HEADER include/spdk/bdev_module.h 00:03:31.928 TEST_HEADER include/spdk/bdev_zone.h 00:03:31.928 TEST_HEADER include/spdk/bit_array.h 00:03:31.928 CC test/app/bdev_svc/bdev_svc.o 00:03:31.928 TEST_HEADER include/spdk/bit_pool.h 00:03:31.928 TEST_HEADER include/spdk/blob_bdev.h 00:03:31.928 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:31.928 TEST_HEADER include/spdk/blobfs.h 00:03:31.928 TEST_HEADER include/spdk/blob.h 00:03:31.928 TEST_HEADER include/spdk/conf.h 00:03:31.928 TEST_HEADER include/spdk/config.h 00:03:31.928 TEST_HEADER include/spdk/cpuset.h 00:03:31.928 TEST_HEADER include/spdk/crc16.h 00:03:31.928 TEST_HEADER include/spdk/crc32.h 00:03:31.928 CC app/fio/nvme/fio_plugin.o 00:03:31.928 TEST_HEADER include/spdk/crc64.h 00:03:31.928 TEST_HEADER include/spdk/dif.h 00:03:31.928 TEST_HEADER include/spdk/dma.h 00:03:31.928 TEST_HEADER include/spdk/endian.h 00:03:31.928 TEST_HEADER include/spdk/env_dpdk.h 00:03:31.928 LINK spdk_nvme_discover 00:03:31.928 TEST_HEADER include/spdk/env.h 00:03:31.928 TEST_HEADER include/spdk/event.h 00:03:31.928 TEST_HEADER include/spdk/fd_group.h 00:03:31.928 TEST_HEADER include/spdk/fd.h 00:03:31.928 TEST_HEADER include/spdk/file.h 00:03:31.928 TEST_HEADER include/spdk/fsdev.h 00:03:31.928 TEST_HEADER include/spdk/fsdev_module.h 00:03:31.928 TEST_HEADER include/spdk/ftl.h 00:03:31.928 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:31.928 TEST_HEADER include/spdk/gpt_spec.h 00:03:31.928 TEST_HEADER include/spdk/hexlify.h 00:03:31.928 TEST_HEADER include/spdk/histogram_data.h 00:03:31.928 TEST_HEADER include/spdk/idxd.h 00:03:31.928 TEST_HEADER include/spdk/idxd_spec.h 00:03:31.928 TEST_HEADER include/spdk/init.h 00:03:31.928 TEST_HEADER include/spdk/ioat.h 00:03:31.928 TEST_HEADER include/spdk/ioat_spec.h 00:03:31.928 TEST_HEADER include/spdk/iscsi_spec.h 00:03:31.928 TEST_HEADER include/spdk/json.h 00:03:31.928 TEST_HEADER include/spdk/jsonrpc.h 00:03:31.928 TEST_HEADER include/spdk/keyring.h 00:03:31.928 TEST_HEADER include/spdk/keyring_module.h 00:03:31.928 TEST_HEADER include/spdk/likely.h 00:03:31.928 TEST_HEADER include/spdk/log.h 00:03:31.928 TEST_HEADER include/spdk/lvol.h 00:03:31.928 TEST_HEADER include/spdk/md5.h 00:03:31.928 TEST_HEADER include/spdk/memory.h 00:03:31.928 TEST_HEADER include/spdk/mmio.h 00:03:31.928 TEST_HEADER include/spdk/nbd.h 00:03:31.928 TEST_HEADER include/spdk/net.h 00:03:31.928 TEST_HEADER include/spdk/notify.h 00:03:31.928 TEST_HEADER include/spdk/nvme.h 00:03:31.928 LINK ioat_perf 00:03:31.928 TEST_HEADER include/spdk/nvme_intel.h 00:03:31.928 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:31.928 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:31.928 TEST_HEADER include/spdk/nvme_spec.h 00:03:31.928 TEST_HEADER include/spdk/nvme_zns.h 
00:03:31.928 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:31.928 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:31.928 TEST_HEADER include/spdk/nvmf.h 00:03:31.928 TEST_HEADER include/spdk/nvmf_spec.h 00:03:31.928 TEST_HEADER include/spdk/nvmf_transport.h 00:03:31.928 TEST_HEADER include/spdk/opal.h 00:03:31.928 TEST_HEADER include/spdk/opal_spec.h 00:03:31.928 TEST_HEADER include/spdk/pci_ids.h 00:03:31.928 TEST_HEADER include/spdk/pipe.h 00:03:31.928 TEST_HEADER include/spdk/queue.h 00:03:31.928 TEST_HEADER include/spdk/reduce.h 00:03:31.928 TEST_HEADER include/spdk/rpc.h 00:03:31.928 TEST_HEADER include/spdk/scheduler.h 00:03:31.928 TEST_HEADER include/spdk/scsi.h 00:03:31.928 TEST_HEADER include/spdk/scsi_spec.h 00:03:31.928 TEST_HEADER include/spdk/sock.h 00:03:31.928 TEST_HEADER include/spdk/stdinc.h 00:03:31.928 TEST_HEADER include/spdk/string.h 00:03:31.928 TEST_HEADER include/spdk/thread.h 00:03:31.928 TEST_HEADER include/spdk/trace.h 00:03:31.928 TEST_HEADER include/spdk/trace_parser.h 00:03:32.187 TEST_HEADER include/spdk/tree.h 00:03:32.187 TEST_HEADER include/spdk/ublk.h 00:03:32.187 TEST_HEADER include/spdk/util.h 00:03:32.187 TEST_HEADER include/spdk/uuid.h 00:03:32.187 TEST_HEADER include/spdk/version.h 00:03:32.187 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:32.187 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:32.187 TEST_HEADER include/spdk/vhost.h 00:03:32.187 TEST_HEADER include/spdk/vmd.h 00:03:32.187 TEST_HEADER include/spdk/xor.h 00:03:32.187 TEST_HEADER include/spdk/zipf.h 00:03:32.187 CXX test/cpp_headers/accel.o 00:03:32.187 CXX test/cpp_headers/accel_module.o 00:03:32.187 LINK bdev_svc 00:03:32.187 LINK spdk_nvme_perf 00:03:32.187 LINK spdk_dd 00:03:32.187 LINK spdk_nvme_identify 00:03:32.187 CC examples/ioat/verify/verify.o 00:03:32.447 CXX test/cpp_headers/assert.o 00:03:32.447 CC app/vhost/vhost.o 00:03:32.447 CXX test/cpp_headers/barrier.o 00:03:32.447 CC test/app/histogram_perf/histogram_perf.o 00:03:32.447 LINK verify 00:03:32.447 CC app/fio/bdev/fio_plugin.o 00:03:32.447 LINK test_dma 00:03:32.447 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:32.706 LINK spdk_nvme 00:03:32.706 CXX test/cpp_headers/base64.o 00:03:32.706 CC test/app/jsoncat/jsoncat.o 00:03:32.706 LINK vhost 00:03:32.706 LINK histogram_perf 00:03:32.966 CC test/app/stub/stub.o 00:03:32.966 CXX test/cpp_headers/bdev.o 00:03:32.966 LINK jsoncat 00:03:32.966 CXX test/cpp_headers/bdev_module.o 00:03:32.966 CXX test/cpp_headers/bdev_zone.o 00:03:32.966 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:32.966 LINK spdk_top 00:03:32.966 LINK nvme_fuzz 00:03:32.966 LINK stub 00:03:33.224 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.224 CXX test/cpp_headers/bit_array.o 00:03:33.224 LINK spdk_bdev 00:03:33.224 CC test/env/vtophys/vtophys.o 00:03:33.224 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:33.224 LINK interrupt_tgt 00:03:33.224 CC test/env/memory/memory_ut.o 00:03:33.224 CXX test/cpp_headers/bit_pool.o 00:03:33.224 LINK vtophys 00:03:33.224 CC test/env/pci/pci_ut.o 00:03:33.482 LINK env_dpdk_post_init 00:03:33.482 CC test/event/event_perf/event_perf.o 00:03:33.482 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:33.482 CC test/nvme/aer/aer.o 00:03:33.482 CXX test/cpp_headers/blob_bdev.o 00:03:33.482 LINK event_perf 00:03:33.482 CC test/rpc_client/rpc_client_test.o 00:03:33.482 CC examples/thread/thread/thread_ex.o 00:03:33.740 LINK mem_callbacks 00:03:33.740 CXX test/cpp_headers/blobfs_bdev.o 00:03:33.740 CC test/accel/dif/dif.o 00:03:33.740 LINK aer 00:03:33.740 LINK 
rpc_client_test 00:03:33.740 CXX test/cpp_headers/blobfs.o 00:03:33.740 LINK pci_ut 00:03:33.740 CC test/event/reactor/reactor.o 00:03:34.000 LINK thread 00:03:34.000 CXX test/cpp_headers/blob.o 00:03:34.000 LINK reactor 00:03:34.260 CC test/nvme/reset/reset.o 00:03:34.260 CXX test/cpp_headers/conf.o 00:03:34.260 CC test/blobfs/mkfs/mkfs.o 00:03:34.260 CC test/event/reactor_perf/reactor_perf.o 00:03:34.260 CC test/event/app_repeat/app_repeat.o 00:03:34.260 CC test/lvol/esnap/esnap.o 00:03:34.260 CXX test/cpp_headers/config.o 00:03:34.518 CXX test/cpp_headers/cpuset.o 00:03:34.518 CC examples/sock/hello_world/hello_sock.o 00:03:34.518 LINK reset 00:03:34.518 LINK mkfs 00:03:34.518 LINK reactor_perf 00:03:34.518 LINK app_repeat 00:03:34.518 CXX test/cpp_headers/crc16.o 00:03:34.518 LINK dif 00:03:34.518 LINK memory_ut 00:03:34.776 LINK hello_sock 00:03:34.776 CC test/nvme/sgl/sgl.o 00:03:34.776 CXX test/cpp_headers/crc32.o 00:03:34.776 CC test/nvme/e2edp/nvme_dp.o 00:03:34.777 CC test/nvme/overhead/overhead.o 00:03:34.777 CC test/event/scheduler/scheduler.o 00:03:35.034 CXX test/cpp_headers/crc64.o 00:03:35.034 CXX test/cpp_headers/dif.o 00:03:35.034 CC test/nvme/err_injection/err_injection.o 00:03:35.034 LINK sgl 00:03:35.034 CC examples/vmd/lsvmd/lsvmd.o 00:03:35.034 LINK nvme_dp 00:03:35.034 CXX test/cpp_headers/dma.o 00:03:35.035 LINK scheduler 00:03:35.293 LINK overhead 00:03:35.293 LINK err_injection 00:03:35.293 CXX test/cpp_headers/endian.o 00:03:35.293 CXX test/cpp_headers/env_dpdk.o 00:03:35.293 LINK lsvmd 00:03:35.293 CC test/bdev/bdevio/bdevio.o 00:03:35.293 LINK iscsi_fuzz 00:03:35.293 CC examples/vmd/led/led.o 00:03:35.293 CXX test/cpp_headers/env.o 00:03:35.552 CXX test/cpp_headers/event.o 00:03:35.552 CC test/nvme/startup/startup.o 00:03:35.552 CXX test/cpp_headers/fd_group.o 00:03:35.552 CC test/nvme/reserve/reserve.o 00:03:35.552 CC examples/idxd/perf/perf.o 00:03:35.552 LINK led 00:03:35.552 CXX test/cpp_headers/fd.o 00:03:35.552 LINK startup 00:03:35.552 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:35.835 CC test/nvme/simple_copy/simple_copy.o 00:03:35.835 LINK reserve 00:03:35.835 CC test/nvme/connect_stress/connect_stress.o 00:03:35.835 LINK bdevio 00:03:35.835 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:35.835 CXX test/cpp_headers/file.o 00:03:35.835 CXX test/cpp_headers/fsdev.o 00:03:35.835 CC test/nvme/boot_partition/boot_partition.o 00:03:35.835 LINK idxd_perf 00:03:35.835 LINK connect_stress 00:03:36.094 LINK simple_copy 00:03:36.094 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:36.094 CXX test/cpp_headers/fsdev_module.o 00:03:36.094 CC test/nvme/compliance/nvme_compliance.o 00:03:36.094 LINK boot_partition 00:03:36.094 CC test/nvme/fused_ordering/fused_ordering.o 00:03:36.094 CXX test/cpp_headers/ftl.o 00:03:36.094 CXX test/cpp_headers/fuse_dispatcher.o 00:03:36.094 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:36.353 LINK hello_fsdev 00:03:36.353 LINK vhost_fuzz 00:03:36.353 CXX test/cpp_headers/gpt_spec.o 00:03:36.353 LINK fused_ordering 00:03:36.353 LINK doorbell_aers 00:03:36.353 CC examples/accel/perf/accel_perf.o 00:03:36.353 LINK nvme_compliance 00:03:36.353 CC examples/blob/hello_world/hello_blob.o 00:03:36.611 CC examples/nvme/hello_world/hello_world.o 00:03:36.611 CXX test/cpp_headers/hexlify.o 00:03:36.611 CC examples/nvme/reconnect/reconnect.o 00:03:36.611 CC test/nvme/fdp/fdp.o 00:03:36.612 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:36.612 CXX test/cpp_headers/histogram_data.o 00:03:36.612 CC 
examples/nvme/arbitration/arbitration.o 00:03:36.612 LINK hello_blob 00:03:36.612 CC examples/nvme/hotplug/hotplug.o 00:03:36.612 LINK hello_world 00:03:36.870 CXX test/cpp_headers/idxd.o 00:03:36.870 LINK reconnect 00:03:37.129 LINK hotplug 00:03:37.129 LINK fdp 00:03:37.129 LINK accel_perf 00:03:37.129 CXX test/cpp_headers/idxd_spec.o 00:03:37.129 LINK arbitration 00:03:37.129 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:37.129 CC examples/blob/cli/blobcli.o 00:03:37.129 CXX test/cpp_headers/init.o 00:03:37.129 LINK nvme_manage 00:03:37.386 CC examples/nvme/abort/abort.o 00:03:37.386 CXX test/cpp_headers/ioat.o 00:03:37.386 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:37.386 CC test/nvme/cuse/cuse.o 00:03:37.386 LINK cmb_copy 00:03:37.386 CXX test/cpp_headers/ioat_spec.o 00:03:37.386 CXX test/cpp_headers/iscsi_spec.o 00:03:37.644 CC examples/bdev/hello_world/hello_bdev.o 00:03:37.644 LINK pmr_persistence 00:03:37.644 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.644 CXX test/cpp_headers/json.o 00:03:37.644 CXX test/cpp_headers/jsonrpc.o 00:03:37.644 CXX test/cpp_headers/keyring.o 00:03:37.644 LINK abort 00:03:37.902 CXX test/cpp_headers/keyring_module.o 00:03:37.902 LINK blobcli 00:03:37.902 CXX test/cpp_headers/likely.o 00:03:37.902 CXX test/cpp_headers/log.o 00:03:37.902 CXX test/cpp_headers/lvol.o 00:03:37.902 LINK hello_bdev 00:03:38.160 CXX test/cpp_headers/md5.o 00:03:38.160 CXX test/cpp_headers/memory.o 00:03:38.160 CXX test/cpp_headers/mmio.o 00:03:38.160 CXX test/cpp_headers/nbd.o 00:03:38.160 CXX test/cpp_headers/net.o 00:03:38.160 CXX test/cpp_headers/notify.o 00:03:38.160 CXX test/cpp_headers/nvme.o 00:03:38.160 CXX test/cpp_headers/nvme_intel.o 00:03:38.160 CXX test/cpp_headers/nvme_ocssd.o 00:03:38.418 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.418 CXX test/cpp_headers/nvme_spec.o 00:03:38.418 CXX test/cpp_headers/nvme_zns.o 00:03:38.418 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.418 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.418 CXX test/cpp_headers/nvmf.o 00:03:38.418 CXX test/cpp_headers/nvmf_spec.o 00:03:38.418 CXX test/cpp_headers/nvmf_transport.o 00:03:38.418 CXX test/cpp_headers/opal.o 00:03:38.418 CXX test/cpp_headers/opal_spec.o 00:03:38.676 CXX test/cpp_headers/pci_ids.o 00:03:38.676 CXX test/cpp_headers/pipe.o 00:03:38.676 CXX test/cpp_headers/queue.o 00:03:38.676 CXX test/cpp_headers/reduce.o 00:03:38.676 CXX test/cpp_headers/rpc.o 00:03:38.676 LINK bdevperf 00:03:38.676 CXX test/cpp_headers/scheduler.o 00:03:38.676 CXX test/cpp_headers/scsi.o 00:03:38.676 CXX test/cpp_headers/scsi_spec.o 00:03:38.676 CXX test/cpp_headers/sock.o 00:03:38.676 CXX test/cpp_headers/stdinc.o 00:03:38.936 CXX test/cpp_headers/string.o 00:03:38.936 CXX test/cpp_headers/thread.o 00:03:38.936 CXX test/cpp_headers/trace.o 00:03:38.936 CXX test/cpp_headers/trace_parser.o 00:03:38.936 CXX test/cpp_headers/tree.o 00:03:38.936 CXX test/cpp_headers/ublk.o 00:03:38.936 CXX test/cpp_headers/util.o 00:03:38.936 CXX test/cpp_headers/uuid.o 00:03:38.936 LINK cuse 00:03:38.936 CXX test/cpp_headers/version.o 00:03:38.936 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.194 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.194 CXX test/cpp_headers/vhost.o 00:03:39.194 CXX test/cpp_headers/vmd.o 00:03:39.194 CXX test/cpp_headers/xor.o 00:03:39.194 CXX test/cpp_headers/zipf.o 00:03:39.194 CC examples/nvmf/nvmf/nvmf.o 00:03:39.454 LINK nvmf 00:03:40.833 LINK esnap 00:03:41.402 00:03:41.402 real 1m30.394s 00:03:41.402 user 7m40.631s 00:03:41.402 sys 2m1.207s 00:03:41.402 10:47:28 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:41.402 10:47:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:41.402 ************************************ 00:03:41.402 END TEST make 00:03:41.402 ************************************ 00:03:41.402 10:47:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.402 10:47:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:41.402 10:47:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:41.402 10:47:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.402 10:47:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.402 10:47:28 -- pm/common@44 -- $ pid=5285 00:03:41.402 10:47:28 -- pm/common@50 -- $ kill -TERM 5285 00:03:41.402 10:47:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.402 10:47:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.402 10:47:28 -- pm/common@44 -- $ pid=5287 00:03:41.402 10:47:28 -- pm/common@50 -- $ kill -TERM 5287 00:03:41.402 10:47:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:41.402 10:47:28 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:41.662 10:47:28 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:41.662 10:47:28 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:41.662 10:47:28 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:41.662 10:47:28 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:41.662 10:47:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:41.662 10:47:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:41.662 10:47:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:41.662 10:47:28 -- scripts/common.sh@336 -- # IFS=.-: 00:03:41.662 10:47:28 -- scripts/common.sh@336 -- # read -ra ver1 00:03:41.662 10:47:28 -- scripts/common.sh@337 -- # IFS=.-: 00:03:41.662 10:47:28 -- scripts/common.sh@337 -- # read -ra ver2 00:03:41.662 10:47:28 -- scripts/common.sh@338 -- # local 'op=<' 00:03:41.662 10:47:28 -- scripts/common.sh@340 -- # ver1_l=2 00:03:41.662 10:47:28 -- scripts/common.sh@341 -- # ver2_l=1 00:03:41.662 10:47:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:41.662 10:47:28 -- scripts/common.sh@344 -- # case "$op" in 00:03:41.662 10:47:28 -- scripts/common.sh@345 -- # : 1 00:03:41.662 10:47:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:41.662 10:47:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:41.662 10:47:28 -- scripts/common.sh@365 -- # decimal 1 00:03:41.662 10:47:28 -- scripts/common.sh@353 -- # local d=1 00:03:41.662 10:47:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:41.662 10:47:28 -- scripts/common.sh@355 -- # echo 1 00:03:41.662 10:47:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:41.662 10:47:28 -- scripts/common.sh@366 -- # decimal 2 00:03:41.662 10:47:28 -- scripts/common.sh@353 -- # local d=2 00:03:41.662 10:47:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:41.662 10:47:28 -- scripts/common.sh@355 -- # echo 2 00:03:41.662 10:47:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:41.662 10:47:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:41.662 10:47:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:41.662 10:47:28 -- scripts/common.sh@368 -- # return 0 00:03:41.662 10:47:28 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:41.662 10:47:28 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.662 --rc genhtml_branch_coverage=1 00:03:41.662 --rc genhtml_function_coverage=1 00:03:41.662 --rc genhtml_legend=1 00:03:41.662 --rc geninfo_all_blocks=1 00:03:41.662 --rc geninfo_unexecuted_blocks=1 00:03:41.662 00:03:41.662 ' 00:03:41.662 10:47:28 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.662 --rc genhtml_branch_coverage=1 00:03:41.662 --rc genhtml_function_coverage=1 00:03:41.662 --rc genhtml_legend=1 00:03:41.662 --rc geninfo_all_blocks=1 00:03:41.662 --rc geninfo_unexecuted_blocks=1 00:03:41.662 00:03:41.662 ' 00:03:41.662 10:47:28 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.662 --rc genhtml_branch_coverage=1 00:03:41.662 --rc genhtml_function_coverage=1 00:03:41.662 --rc genhtml_legend=1 00:03:41.662 --rc geninfo_all_blocks=1 00:03:41.662 --rc geninfo_unexecuted_blocks=1 00:03:41.662 00:03:41.662 ' 00:03:41.662 10:47:28 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:41.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:41.662 --rc genhtml_branch_coverage=1 00:03:41.662 --rc genhtml_function_coverage=1 00:03:41.662 --rc genhtml_legend=1 00:03:41.662 --rc geninfo_all_blocks=1 00:03:41.662 --rc geninfo_unexecuted_blocks=1 00:03:41.662 00:03:41.662 ' 00:03:41.662 10:47:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:41.662 10:47:28 -- nvmf/common.sh@7 -- # uname -s 00:03:41.662 10:47:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.662 10:47:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.662 10:47:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.662 10:47:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.662 10:47:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.662 10:47:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.662 10:47:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.662 10:47:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.662 10:47:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.662 10:47:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.662 10:47:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1c0aaf4c-a905-4d21-869d-96349a84a203 00:03:41.662 
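[editor's note] The xtrace above ("lt 1.15 2" through scripts/common.sh) is deciding whether the detected lcov predates version 2 before autotest picks its coverage flags. Below is a minimal standalone bash sketch of that dotted-version comparison, reconstructed from the traced steps rather than copied verbatim from the repo; the handling of missing fields, non-numeric fields, and operators other than '<' is an assumption, since the trace only exercises the first field of "1.15" against "2".

lt() { cmp_versions "$1" '<' "$2"; }

decimal() {
    # Mirrors the traced helper: plain base-10 fields pass through.
    # Collapsing anything else to 0 is an assumption (the trace only
    # ever feeds this "1" and "2").
    local d=$1
    if [[ $d =~ ^[0-9]+$ ]]; then echo "$d"; else echo 0; fi
}

cmp_versions() {
    local -a ver1 ver2
    local op=$2 ver1_l ver2_l v f1 f2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    # Walk the longer field list, padding the shorter one with 0s.
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        f1=$(decimal "${ver1[v]:-0}") f2=$(decimal "${ver2[v]:-0}")
        ((f1 > f2)) && { [[ $op == '>' ]]; return; }
        ((f1 < f2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # equal versions satisfy ==, <=, >=
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # succeeds, matching the trace

With these definitions, "lt 1.15 2" returns success exactly as in the trace: the loop compares field 0, sees 1 < 2, and returns the result of testing the '<' operator.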
10:47:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=1c0aaf4c-a905-4d21-869d-96349a84a203 00:03:41.662 10:47:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.662 10:47:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.662 10:47:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.662 10:47:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.662 10:47:28 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:41.662 10:47:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:41.662 10:47:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.662 10:47:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.662 10:47:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.662 10:47:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.662 10:47:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.662 10:47:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.662 10:47:28 -- paths/export.sh@5 -- # export PATH 00:03:41.662 10:47:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.662 10:47:28 -- nvmf/common.sh@51 -- # : 0 00:03:41.662 10:47:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:41.662 10:47:28 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:41.662 10:47:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.662 10:47:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.662 10:47:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.662 10:47:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:41.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:41.662 10:47:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:41.662 10:47:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:41.662 10:47:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:41.662 10:47:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.662 10:47:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.662 10:47:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.662 10:47:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.662 10:47:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.662 10:47:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.662 10:47:28 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.662 10:47:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.922 10:47:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.922 10:47:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.922 10:47:28 -- spdk/autotest.sh@48 -- # udevadm_pid=54839 00:03:41.922 10:47:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.922 10:47:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.922 10:47:28 -- pm/common@17 -- # local monitor 00:03:41.922 10:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.922 10:47:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.922 10:47:28 -- pm/common@25 -- # sleep 1 00:03:41.922 10:47:28 -- pm/common@21 -- # date +%s 00:03:41.922 10:47:28 -- pm/common@21 -- # date +%s 00:03:41.922 10:47:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731667648 00:03:41.922 10:47:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731667648 00:03:41.922 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731667648_collect-vmstat.pm.log 00:03:41.922 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731667648_collect-cpu-load.pm.log 00:03:42.868 10:47:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.868 10:47:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.868 10:47:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:42.868 10:47:29 -- common/autotest_common.sh@10 -- # set +x 00:03:42.868 10:47:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.868 10:47:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:42.868 10:47:29 -- common/autotest_common.sh@10 -- # set +x 00:03:42.868 10:47:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.868 10:47:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.868 10:47:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.868 10:47:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.868 10:47:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.868 10:47:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.868 10:47:29 -- common/autotest_common.sh@1457 -- # uname 00:03:42.868 10:47:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:42.868 10:47:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.868 10:47:29 -- common/autotest_common.sh@1477 -- # uname 00:03:42.868 10:47:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:42.868 10:47:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:42.868 10:47:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:43.128 lcov: LCOV version 1.15 00:03:43.128 10:47:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:01.270 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:01.270 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:16.168 10:48:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:16.168 10:48:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:16.168 10:48:00 -- common/autotest_common.sh@10 -- # set +x 00:04:16.168 10:48:00 -- spdk/autotest.sh@78 -- # rm -f 00:04:16.168 10:48:00 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:16.168 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:16.168 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:16.168 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:16.168 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:16.168 10:48:02 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:16.168 10:48:02 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:16.168 10:48:02 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:16.168 10:48:02 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:16.169 10:48:02 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:16.169 10:48:02 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:16.169 10:48:02 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:16.169 10:48:02 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:16.169 10:48:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.169 10:48:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:16.169 10:48:02 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:16.169 10:48:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.169 No valid GPT data, bailing 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # pt= 00:04:16.169 10:48:02 -- scripts/common.sh@395 -- # return 1 00:04:16.169 10:48:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.169 1+0 records in 00:04:16.169 1+0 records out 00:04:16.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200397 s, 52.3 MB/s 00:04:16.169 10:48:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.169 10:48:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:16.169 10:48:02 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:16.169 10:48:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:16.169 No valid GPT data, bailing 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # pt= 00:04:16.169 10:48:02 -- scripts/common.sh@395 -- # return 1 00:04:16.169 10:48:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:16.169 1+0 records in 00:04:16.169 1+0 records out 00:04:16.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419283 s, 250 MB/s 00:04:16.169 10:48:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.169 10:48:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:16.169 10:48:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:16.169 10:48:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:16.169 No valid GPT data, bailing 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # pt= 00:04:16.169 10:48:02 -- scripts/common.sh@395 -- # return 1 00:04:16.169 10:48:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:16.169 1+0 
records in 00:04:16.169 1+0 records out 00:04:16.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00648034 s, 162 MB/s 00:04:16.169 10:48:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.169 10:48:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:16.169 10:48:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:16.169 10:48:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:16.169 No valid GPT data, bailing 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # pt= 00:04:16.169 10:48:02 -- scripts/common.sh@395 -- # return 1 00:04:16.169 10:48:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:16.169 1+0 records in 00:04:16.169 1+0 records out 00:04:16.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00418013 s, 251 MB/s 00:04:16.169 10:48:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.169 10:48:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:16.169 10:48:02 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:16.169 10:48:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:16.169 No valid GPT data, bailing 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # pt= 00:04:16.169 10:48:02 -- scripts/common.sh@395 -- # return 1 00:04:16.169 10:48:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:16.169 1+0 records in 00:04:16.169 1+0 records out 00:04:16.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00427585 s, 245 MB/s 00:04:16.169 10:48:02 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.169 10:48:02 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:16.169 10:48:02 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:16.169 10:48:02 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:16.169 10:48:02 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:16.169 No valid GPT data, bailing 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:16.169 10:48:02 -- scripts/common.sh@394 -- # pt= 00:04:16.169 10:48:02 -- scripts/common.sh@395 -- # return 1 00:04:16.169 10:48:02 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:16.169 1+0 records in 00:04:16.169 1+0 records out 00:04:16.169 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494298 s, 212 MB/s 00:04:16.169 10:48:02 -- spdk/autotest.sh@105 -- # sync 00:04:16.169 10:48:02 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.169 10:48:02 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.169 10:48:02 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:19.451 10:48:06 -- spdk/autotest.sh@111 -- # uname -s 00:04:19.451 10:48:06 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:19.451 10:48:06 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:19.451 10:48:06 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:20.018 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:20.584 
Hugepages 00:04:20.584 node hugesize free / total 00:04:20.584 node0 1048576kB 0 / 0 00:04:20.584 node0 2048kB 0 / 0 00:04:20.584 00:04:20.584 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.842 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:20.842 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:21.101 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:21.101 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:21.361 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:21.361 10:48:07 -- spdk/autotest.sh@117 -- # uname -s 00:04:21.361 10:48:08 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:21.361 10:48:08 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:21.361 10:48:08 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.929 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.867 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.867 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.867 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.867 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.867 10:48:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:24.247 10:48:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:24.247 10:48:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:24.247 10:48:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:24.247 10:48:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:24.247 10:48:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.247 10:48:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.247 10:48:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.247 10:48:10 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.247 10:48:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.247 10:48:10 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:24.247 10:48:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:24.247 10:48:10 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.506 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.766 Waiting for block devices as requested 00:04:24.766 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.025 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.025 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:25.283 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:30.563 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:30.563 10:48:16 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.563 10:48:16 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:30.563 10:48:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:30.563 10:48:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.563 10:48:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.563 10:48:17 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:30.563 10:48:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:30.563 10:48:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:30.563 10:48:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:30.563 10:48:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:30.563 10:48:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.563 10:48:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:30.563 10:48:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.563 10:48:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.563 10:48:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.563 10:48:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.563 10:48:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:30.563 10:48:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.563 10:48:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.563 10:48:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.563 10:48:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.563 10:48:17 -- common/autotest_common.sh@1543 -- # continue 00:04:30.564 10:48:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.564 10:48:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.564 10:48:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.564 10:48:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.564 10:48:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1543 -- # continue 00:04:30.564 10:48:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.564 10:48:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.564 10:48:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.564 10:48:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.564 10:48:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1543 -- # continue 00:04:30.564 10:48:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:30.564 10:48:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:30.564 10:48:17 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:30.564 10:48:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:30.564 10:48:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:30.564 10:48:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:30.564 10:48:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:04:30.564 10:48:17 -- common/autotest_common.sh@1543 -- # continue 00:04:30.564 10:48:17 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:30.564 10:48:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.564 10:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:30.564 10:48:17 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:30.564 10:48:17 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.564 10:48:17 -- common/autotest_common.sh@10 -- # set +x 00:04:30.564 10:48:17 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:31.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.071 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.071 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.071 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.071 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.330 10:48:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:32.330 10:48:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.330 10:48:18 -- common/autotest_common.sh@10 -- # set +x 00:04:32.330 10:48:19 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:32.330 10:48:19 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:32.330 10:48:19 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.330 10:48:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:32.330 10:48:19 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:32.330 10:48:19 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:32.330 10:48:19 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:32.330 10:48:19 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:32.330 10:48:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:32.330 10:48:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:32.330 10:48:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.330 10:48:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.330 10:48:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:32.330 10:48:19 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:32.330 10:48:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:32.330 10:48:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.330 10:48:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.330 10:48:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.330 10:48:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.330 10:48:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.330 10:48:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:32.330 10:48:19 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:32.330 10:48:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:32.330 10:48:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.330 10:48:19 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:32.330 10:48:19 -- common/autotest_common.sh@1572 -- # return 0 00:04:32.330 10:48:19 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:32.330 10:48:19 -- common/autotest_common.sh@1580 -- # return 0 00:04:32.330 10:48:19 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.330 10:48:19 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.330 10:48:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.330 10:48:19 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.330 10:48:19 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.330 10:48:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.330 10:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:32.588 10:48:19 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.588 10:48:19 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.588 10:48:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.588 10:48:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.588 10:48:19 -- common/autotest_common.sh@10 -- # set +x 00:04:32.588 ************************************ 00:04:32.588 START TEST env 00:04:32.588 ************************************ 00:04:32.588 10:48:19 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.588 * Looking for test storage... 00:04:32.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:32.588 10:48:19 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:32.588 10:48:19 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:32.588 10:48:19 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:32.588 10:48:19 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:32.588 10:48:19 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.588 10:48:19 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.588 10:48:19 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.588 10:48:19 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.588 10:48:19 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.588 10:48:19 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.589 10:48:19 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.589 10:48:19 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.589 10:48:19 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.589 10:48:19 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.589 10:48:19 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.589 10:48:19 env -- scripts/common.sh@344 -- # case "$op" in 00:04:32.589 10:48:19 env -- scripts/common.sh@345 -- # : 1 00:04:32.589 10:48:19 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.589 10:48:19 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:32.589 10:48:19 env -- scripts/common.sh@365 -- # decimal 1 00:04:32.589 10:48:19 env -- scripts/common.sh@353 -- # local d=1 00:04:32.589 10:48:19 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.589 10:48:19 env -- scripts/common.sh@355 -- # echo 1 00:04:32.589 10:48:19 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.848 10:48:19 env -- scripts/common.sh@366 -- # decimal 2 00:04:32.848 10:48:19 env -- scripts/common.sh@353 -- # local d=2 00:04:32.848 10:48:19 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.848 10:48:19 env -- scripts/common.sh@355 -- # echo 2 00:04:32.848 10:48:19 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.848 10:48:19 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.848 10:48:19 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.848 10:48:19 env -- scripts/common.sh@368 -- # return 0 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:32.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.848 --rc genhtml_branch_coverage=1 00:04:32.848 --rc genhtml_function_coverage=1 00:04:32.848 --rc genhtml_legend=1 00:04:32.848 --rc geninfo_all_blocks=1 00:04:32.848 --rc geninfo_unexecuted_blocks=1 00:04:32.848 00:04:32.848 ' 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:32.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.848 --rc genhtml_branch_coverage=1 00:04:32.848 --rc genhtml_function_coverage=1 00:04:32.848 --rc genhtml_legend=1 00:04:32.848 --rc geninfo_all_blocks=1 00:04:32.848 --rc geninfo_unexecuted_blocks=1 00:04:32.848 00:04:32.848 ' 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:32.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.848 --rc genhtml_branch_coverage=1 00:04:32.848 --rc genhtml_function_coverage=1 00:04:32.848 --rc genhtml_legend=1 00:04:32.848 --rc geninfo_all_blocks=1 00:04:32.848 --rc geninfo_unexecuted_blocks=1 00:04:32.848 00:04:32.848 ' 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:32.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.848 --rc genhtml_branch_coverage=1 00:04:32.848 --rc genhtml_function_coverage=1 00:04:32.848 --rc genhtml_legend=1 00:04:32.848 --rc geninfo_all_blocks=1 00:04:32.848 --rc geninfo_unexecuted_blocks=1 00:04:32.848 00:04:32.848 ' 00:04:32.848 10:48:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.848 10:48:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.848 10:48:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:32.848 ************************************ 00:04:32.848 START TEST env_memory 00:04:32.848 ************************************ 00:04:32.848 10:48:19 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:32.848 00:04:32.848 00:04:32.848 CUnit - A unit testing framework for C - Version 2.1-3 00:04:32.848 http://cunit.sourceforge.net/ 00:04:32.848 00:04:32.848 00:04:32.848 Suite: memory 00:04:32.848 Test: alloc and free memory map ...[2024-11-15 10:48:19.562361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:32.848 passed 00:04:32.848 Test: mem map translation ...[2024-11-15 10:48:19.613573] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:32.848 [2024-11-15 10:48:19.613769] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:32.848 [2024-11-15 10:48:19.613946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:32.848 [2024-11-15 10:48:19.614128] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:32.848 passed 00:04:32.848 Test: mem map registration ...[2024-11-15 10:48:19.692048] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:32.848 [2024-11-15 10:48:19.692120] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:33.108 passed 00:04:33.108 Test: mem map adjacent registrations ...passed 00:04:33.108 00:04:33.108 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.108 suites 1 1 n/a 0 0 00:04:33.108 tests 4 4 4 0 0 00:04:33.108 asserts 152 152 152 0 n/a 00:04:33.108 00:04:33.108 Elapsed time = 0.285 seconds 00:04:33.108 00:04:33.108 real 0m0.345s 00:04:33.108 user 0m0.297s 00:04:33.108 sys 0m0.036s 00:04:33.108 ************************************ 00:04:33.108 END TEST env_memory 00:04:33.108 ************************************ 00:04:33.108 10:48:19 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.108 10:48:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:33.108 10:48:19 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.108 10:48:19 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.108 10:48:19 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.108 10:48:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.108 ************************************ 00:04:33.108 START TEST env_vtophys 00:04:33.108 ************************************ 00:04:33.108 10:48:19 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.108 EAL: lib.eal log level changed from notice to debug 00:04:33.108 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 1 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 2 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 3 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 4 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 5 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 6 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 7 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 8 as core 0 on socket 0 00:04:33.108 EAL: Detected lcore 9 as core 0 on socket 0 00:04:33.108 EAL: Maximum logical cores by configuration: 128 00:04:33.108 EAL: Detected CPU lcores: 10 00:04:33.108 EAL: Detected NUMA nodes: 1 00:04:33.108 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:33.108 EAL: Detected shared linkage of DPDK 00:04:33.108 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:33.367 EAL: Selected IOVA mode 'PA' 00:04:33.367 EAL: Probing VFIO support... 00:04:33.367 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.367 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:33.367 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.367 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.367 EAL: Setting up physically contiguous memory... 00:04:33.367 EAL: Setting maximum number of open files to 524288 00:04:33.367 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.367 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.367 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.367 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.367 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.367 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.367 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.367 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.367 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.367 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.367 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.367 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.367 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.367 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.367 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.367 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.367 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.367 EAL: Hugepages will be freed exactly as allocated. 00:04:33.367 EAL: No shared files mode enabled, IPC is disabled 00:04:33.367 EAL: No shared files mode enabled, IPC is disabled 00:04:33.367 EAL: TSC frequency is ~2490000 KHz 00:04:33.367 EAL: Main lcore 0 is ready (tid=7fe4ce61da40;cpuset=[0]) 00:04:33.367 EAL: Trying to obtain current memory policy. 00:04:33.367 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.367 EAL: Restoring previous memory policy: 0 00:04:33.367 EAL: request: mp_malloc_sync 00:04:33.367 EAL: No shared files mode enabled, IPC is disabled 00:04:33.367 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.367 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.367 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.367 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.367 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:33.367 00:04:33.367 00:04:33.367 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.367 http://cunit.sourceforge.net/ 00:04:33.367 00:04:33.367 00:04:33.367 Suite: components_suite 00:04:33.936 Test: vtophys_malloc_test ...passed 00:04:33.936 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:33.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.936 EAL: Restoring previous memory policy: 4 00:04:33.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.936 EAL: request: mp_malloc_sync 00:04:33.936 EAL: No shared files mode enabled, IPC is disabled 00:04:33.936 EAL: Heap on socket 0 was expanded by 4MB 00:04:33.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.936 EAL: request: mp_malloc_sync 00:04:33.936 EAL: No shared files mode enabled, IPC is disabled 00:04:33.936 EAL: Heap on socket 0 was shrunk by 4MB 00:04:33.936 EAL: Trying to obtain current memory policy. 00:04:33.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.936 EAL: Restoring previous memory policy: 4 00:04:33.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.936 EAL: request: mp_malloc_sync 00:04:33.936 EAL: No shared files mode enabled, IPC is disabled 00:04:33.936 EAL: Heap on socket 0 was expanded by 6MB 00:04:33.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.936 EAL: request: mp_malloc_sync 00:04:33.936 EAL: No shared files mode enabled, IPC is disabled 00:04:33.936 EAL: Heap on socket 0 was shrunk by 6MB 00:04:33.936 EAL: Trying to obtain current memory policy. 00:04:33.936 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.936 EAL: Restoring previous memory policy: 4 00:04:33.936 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.936 EAL: request: mp_malloc_sync 00:04:33.936 EAL: No shared files mode enabled, IPC is disabled 00:04:33.937 EAL: Heap on socket 0 was expanded by 10MB 00:04:33.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.937 EAL: request: mp_malloc_sync 00:04:33.937 EAL: No shared files mode enabled, IPC is disabled 00:04:33.937 EAL: Heap on socket 0 was shrunk by 10MB 00:04:33.937 EAL: Trying to obtain current memory policy. 00:04:33.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.937 EAL: Restoring previous memory policy: 4 00:04:33.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.937 EAL: request: mp_malloc_sync 00:04:33.937 EAL: No shared files mode enabled, IPC is disabled 00:04:33.937 EAL: Heap on socket 0 was expanded by 18MB 00:04:33.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.937 EAL: request: mp_malloc_sync 00:04:33.937 EAL: No shared files mode enabled, IPC is disabled 00:04:33.937 EAL: Heap on socket 0 was shrunk by 18MB 00:04:33.937 EAL: Trying to obtain current memory policy. 00:04:33.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.937 EAL: Restoring previous memory policy: 4 00:04:33.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.937 EAL: request: mp_malloc_sync 00:04:33.937 EAL: No shared files mode enabled, IPC is disabled 00:04:33.937 EAL: Heap on socket 0 was expanded by 34MB 00:04:33.937 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.937 EAL: request: mp_malloc_sync 00:04:33.937 EAL: No shared files mode enabled, IPC is disabled 00:04:33.937 EAL: Heap on socket 0 was shrunk by 34MB 00:04:33.937 EAL: Trying to obtain current memory policy. 
00:04:33.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.196 EAL: Restoring previous memory policy: 4 00:04:34.196 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.196 EAL: request: mp_malloc_sync 00:04:34.196 EAL: No shared files mode enabled, IPC is disabled 00:04:34.196 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.196 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.196 EAL: request: mp_malloc_sync 00:04:34.196 EAL: No shared files mode enabled, IPC is disabled 00:04:34.196 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.196 EAL: Trying to obtain current memory policy. 00:04:34.196 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.196 EAL: Restoring previous memory policy: 4 00:04:34.196 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.196 EAL: request: mp_malloc_sync 00:04:34.456 EAL: No shared files mode enabled, IPC is disabled 00:04:34.456 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.456 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.456 EAL: request: mp_malloc_sync 00:04:34.456 EAL: No shared files mode enabled, IPC is disabled 00:04:34.456 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.715 EAL: Trying to obtain current memory policy. 00:04:34.715 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.715 EAL: Restoring previous memory policy: 4 00:04:34.715 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.715 EAL: request: mp_malloc_sync 00:04:34.715 EAL: No shared files mode enabled, IPC is disabled 00:04:34.715 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.284 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.284 EAL: request: mp_malloc_sync 00:04:35.284 EAL: No shared files mode enabled, IPC is disabled 00:04:35.284 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.851 EAL: Trying to obtain current memory policy. 00:04:35.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.851 EAL: Restoring previous memory policy: 4 00:04:35.851 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.851 EAL: request: mp_malloc_sync 00:04:35.851 EAL: No shared files mode enabled, IPC is disabled 00:04:35.851 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.790 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.790 EAL: request: mp_malloc_sync 00:04:36.790 EAL: No shared files mode enabled, IPC is disabled 00:04:36.790 EAL: Heap on socket 0 was shrunk by 514MB 00:04:37.728 EAL: Trying to obtain current memory policy. 
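Note: the expansion sizes in this suite follow 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB), so each expand/shrink pair exercises a progressively larger slice of the hugepage-backed heap. A quick, purely illustrative shell check of that series (not part of the test itself):

  for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done
  # one per line: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB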
00:04:37.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:37.999 EAL: Restoring previous memory policy: 4 00:04:37.999 EAL: Calling mem event callback 'spdk:(nil)' 00:04:37.999 EAL: request: mp_malloc_sync 00:04:37.999 EAL: No shared files mode enabled, IPC is disabled 00:04:37.999 EAL: Heap on socket 0 was expanded by 1026MB 00:04:39.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.953 EAL: request: mp_malloc_sync 00:04:39.953 EAL: No shared files mode enabled, IPC is disabled 00:04:39.953 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:41.859 passed 00:04:41.859 00:04:41.859 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.859 suites 1 1 n/a 0 0 00:04:41.859 tests 2 2 2 0 0 00:04:41.859 asserts 5663 5663 5663 0 n/a 00:04:41.859 00:04:41.859 Elapsed time = 8.107 seconds 00:04:41.859 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.859 EAL: request: mp_malloc_sync 00:04:41.859 EAL: No shared files mode enabled, IPC is disabled 00:04:41.859 EAL: Heap on socket 0 was shrunk by 2MB 00:04:41.859 EAL: No shared files mode enabled, IPC is disabled 00:04:41.859 EAL: No shared files mode enabled, IPC is disabled 00:04:41.859 EAL: No shared files mode enabled, IPC is disabled 00:04:41.859 00:04:41.859 real 0m8.455s 00:04:41.859 user 0m7.422s 00:04:41.859 sys 0m0.865s 00:04:41.859 10:48:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.859 ************************************ 00:04:41.859 END TEST env_vtophys 00:04:41.859 ************************************ 00:04:41.859 10:48:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 10:48:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.859 10:48:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.859 10:48:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.859 10:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.859 ************************************ 00:04:41.859 START TEST env_pci 00:04:41.859 ************************************ 00:04:41.860 10:48:28 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:41.860 00:04:41.860 00:04:41.860 CUnit - A unit testing framework for C - Version 2.1-3 00:04:41.860 http://cunit.sourceforge.net/ 00:04:41.860 00:04:41.860 00:04:41.860 Suite: pci 00:04:41.860 Test: pci_hook ...[2024-11-15 10:48:28.447747] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57709 has claimed it 00:04:41.860 passed 00:04:41.860 00:04:41.860 Run Summary: Type Total Ran Passed Failed Inactive 00:04:41.860 suites 1 1 n/a 0 0 00:04:41.860 tests 1 1 1 0 0 00:04:41.860 asserts 25 25 25 0 n/a 00:04:41.860 00:04:41.860 Elapsed time = 0.008 seconds 00:04:41.860 EAL: Cannot find device (10000:00:01.0) 00:04:41.860 EAL: Failed to attach device on primary process 00:04:41.860 00:04:41.860 real 0m0.112s 00:04:41.860 user 0m0.049s 00:04:41.860 sys 0m0.061s 00:04:41.860 10:48:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.860 ************************************ 00:04:41.860 END TEST env_pci 00:04:41.860 ************************************ 00:04:41.860 10:48:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:41.860 10:48:28 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:41.860 10:48:28 env -- env/env.sh@15 -- # uname 00:04:41.860 10:48:28 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:41.860 10:48:28 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:41.860 10:48:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.860 10:48:28 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:41.860 10:48:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.860 10:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:41.860 ************************************ 00:04:41.860 START TEST env_dpdk_post_init 00:04:41.860 ************************************ 00:04:41.860 10:48:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:41.860 EAL: Detected CPU lcores: 10 00:04:41.860 EAL: Detected NUMA nodes: 1 00:04:41.860 EAL: Detected shared linkage of DPDK 00:04:41.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:41.860 EAL: Selected IOVA mode 'PA' 00:04:42.119 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.119 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:42.119 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:42.119 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:42.119 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:42.119 Starting DPDK initialization... 00:04:42.119 Starting SPDK post initialization... 00:04:42.119 SPDK NVMe probe 00:04:42.119 Attaching to 0000:00:10.0 00:04:42.119 Attaching to 0000:00:11.0 00:04:42.119 Attaching to 0000:00:12.0 00:04:42.119 Attaching to 0000:00:13.0 00:04:42.119 Attached to 0000:00:10.0 00:04:42.119 Attached to 0000:00:11.0 00:04:42.119 Attached to 0000:00:13.0 00:04:42.119 Attached to 0000:00:12.0 00:04:42.119 Cleaning up... 
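The four attach targets above are QEMU's emulated NVMe controllers (PCI ID 1b36:0010). A hedged sketch of reproducing this run by hand from an SPDK checkout — scripts/setup.sh (SPDK's stock helper) rebinds the controllers to a userspace-capable driver first; the core mask and base virtual address are the same flags visible in the invocation above:

  sudo scripts/setup.sh                        # unbind NVMe controllers from the kernel nvme driver
  sudo ./test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000  # flags as passed by env.sh@24 above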
00:04:42.119 00:04:42.119 real 0m0.311s 00:04:42.119 user 0m0.117s 00:04:42.119 sys 0m0.096s 00:04:42.119 10:48:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.119 ************************************ 00:04:42.119 END TEST env_dpdk_post_init 00:04:42.119 ************************************ 00:04:42.119 10:48:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.119 10:48:28 env -- env/env.sh@26 -- # uname 00:04:42.119 10:48:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:42.119 10:48:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.119 10:48:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.119 10:48:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.119 10:48:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:42.379 ************************************ 00:04:42.379 START TEST env_mem_callbacks 00:04:42.379 ************************************ 00:04:42.379 10:48:28 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:42.379 EAL: Detected CPU lcores: 10 00:04:42.379 EAL: Detected NUMA nodes: 1 00:04:42.379 EAL: Detected shared linkage of DPDK 00:04:42.379 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:42.379 EAL: Selected IOVA mode 'PA' 00:04:42.379 00:04:42.379 00:04:42.379 CUnit - A unit testing framework for C - Version 2.1-3 00:04:42.379 http://cunit.sourceforge.net/ 00:04:42.379 00:04:42.379 00:04:42.379 Suite: memory 00:04:42.379 Test: test ... 00:04:42.379 register 0x200000200000 2097152 00:04:42.379 malloc 3145728 00:04:42.379 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:42.379 register 0x200000400000 4194304 00:04:42.379 buf 0x2000004fffc0 len 3145728 PASSED 00:04:42.379 malloc 64 00:04:42.379 buf 0x2000004ffec0 len 64 PASSED 00:04:42.379 malloc 4194304 00:04:42.379 register 0x200000800000 6291456 00:04:42.379 buf 0x2000009fffc0 len 4194304 PASSED 00:04:42.379 free 0x2000004fffc0 3145728 00:04:42.379 free 0x2000004ffec0 64 00:04:42.379 unregister 0x200000400000 4194304 PASSED 00:04:42.379 free 0x2000009fffc0 4194304 00:04:42.379 unregister 0x200000800000 6291456 PASSED 00:04:42.379 malloc 8388608 00:04:42.379 register 0x200000400000 10485760 00:04:42.379 buf 0x2000005fffc0 len 8388608 PASSED 00:04:42.379 free 0x2000005fffc0 8388608 00:04:42.639 unregister 0x200000400000 10485760 PASSED 00:04:42.639 passed 00:04:42.639 00:04:42.639 Run Summary: Type Total Ran Passed Failed Inactive 00:04:42.639 suites 1 1 n/a 0 0 00:04:42.639 tests 1 1 1 0 0 00:04:42.639 asserts 15 15 15 0 n/a 00:04:42.639 00:04:42.639 Elapsed time = 0.081 seconds 00:04:42.639 00:04:42.639 real 0m0.294s 00:04:42.639 user 0m0.112s 00:04:42.639 sys 0m0.079s 00:04:42.639 10:48:29 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.639 10:48:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:42.639 ************************************ 00:04:42.639 END TEST env_mem_callbacks 00:04:42.639 ************************************ 00:04:42.639 ************************************ 00:04:42.639 END TEST env 00:04:42.639 ************************************ 00:04:42.639 00:04:42.639 real 0m10.136s 00:04:42.639 user 0m8.240s 00:04:42.639 sys 0m1.520s 00:04:42.639 10:48:29 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.639 10:48:29 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:42.639 10:48:29 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.639 10:48:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.639 10:48:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.639 10:48:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.639 ************************************ 00:04:42.639 START TEST rpc 00:04:42.639 ************************************ 00:04:42.639 10:48:29 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:42.899 * Looking for test storage... 00:04:42.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.899 10:48:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.899 10:48:29 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.899 10:48:29 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.899 10:48:29 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.899 10:48:29 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.899 10:48:29 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:42.899 10:48:29 rpc -- scripts/common.sh@345 -- # : 1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.899 10:48:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.899 10:48:29 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@353 -- # local d=1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.899 10:48:29 rpc -- scripts/common.sh@355 -- # echo 1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.899 10:48:29 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@353 -- # local d=2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.899 10:48:29 rpc -- scripts/common.sh@355 -- # echo 2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.899 10:48:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.899 10:48:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.899 10:48:29 rpc -- scripts/common.sh@368 -- # return 0 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.899 --rc genhtml_branch_coverage=1 00:04:42.899 --rc genhtml_function_coverage=1 00:04:42.899 --rc genhtml_legend=1 00:04:42.899 --rc geninfo_all_blocks=1 00:04:42.899 --rc geninfo_unexecuted_blocks=1 00:04:42.899 00:04:42.899 ' 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.899 --rc genhtml_branch_coverage=1 00:04:42.899 --rc genhtml_function_coverage=1 00:04:42.899 --rc genhtml_legend=1 00:04:42.899 --rc geninfo_all_blocks=1 00:04:42.899 --rc geninfo_unexecuted_blocks=1 00:04:42.899 00:04:42.899 ' 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.899 --rc genhtml_branch_coverage=1 00:04:42.899 --rc genhtml_function_coverage=1 00:04:42.899 --rc genhtml_legend=1 00:04:42.899 --rc geninfo_all_blocks=1 00:04:42.899 --rc geninfo_unexecuted_blocks=1 00:04:42.899 00:04:42.899 ' 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.899 --rc genhtml_branch_coverage=1 00:04:42.899 --rc genhtml_function_coverage=1 00:04:42.899 --rc genhtml_legend=1 00:04:42.899 --rc geninfo_all_blocks=1 00:04:42.899 --rc geninfo_unexecuted_blocks=1 00:04:42.899 00:04:42.899 ' 00:04:42.899 10:48:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57836 00:04:42.899 10:48:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:42.899 10:48:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.899 10:48:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57836 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 57836 ']' 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
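waitforlisten, invoked here, just polls until the target's UNIX-domain RPC socket answers. A minimal stand-in for what the helper does (a sketch, not the real implementation — the socket path is SPDK's default, rpc_get_methods serves only as a cheap liveness probe, and the real helper additionally handles timeouts and process death):

  ./build/bin/spdk_tgt -e bdev &                 # same invocation as rpc.sh@64 above
  spdk_pid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
          sleep 0.1                              # retry until the socket accepts requests
  done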
00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.899 10:48:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.158 [2024-11-15 10:48:29.757103] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:04:43.158 [2024-11-15 10:48:29.757238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57836 ] 00:04:43.158 [2024-11-15 10:48:29.942015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.418 [2024-11-15 10:48:30.059561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:43.418 [2024-11-15 10:48:30.059626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57836' to capture a snapshot of events at runtime. 00:04:43.418 [2024-11-15 10:48:30.059640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:43.418 [2024-11-15 10:48:30.059654] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:43.418 [2024-11-15 10:48:30.059664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57836 for offline analysis/debug. 00:04:43.418 [2024-11-15 10:48:30.060929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.363 10:48:30 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.363 10:48:30 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:44.363 10:48:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.363 10:48:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:44.363 10:48:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:44.363 10:48:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:44.363 10:48:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.363 10:48:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.363 10:48:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.363 ************************************ 00:04:44.363 START TEST rpc_integrity 00:04:44.363 ************************************ 00:04:44.363 10:48:30 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:44.363 10:48:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:44.363 10:48:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.363 10:48:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.363 10:48:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.363 10:48:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:44.363 10:48:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.363 10:48:31 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:44.363 { 00:04:44.363 "name": "Malloc0", 00:04:44.363 "aliases": [ 00:04:44.363 "0baba221-151e-4161-ad40-5f7df755137d" 00:04:44.363 ], 00:04:44.363 "product_name": "Malloc disk", 00:04:44.363 "block_size": 512, 00:04:44.363 "num_blocks": 16384, 00:04:44.363 "uuid": "0baba221-151e-4161-ad40-5f7df755137d", 00:04:44.363 "assigned_rate_limits": { 00:04:44.363 "rw_ios_per_sec": 0, 00:04:44.363 "rw_mbytes_per_sec": 0, 00:04:44.363 "r_mbytes_per_sec": 0, 00:04:44.363 "w_mbytes_per_sec": 0 00:04:44.363 }, 00:04:44.363 "claimed": false, 00:04:44.363 "zoned": false, 00:04:44.363 "supported_io_types": { 00:04:44.363 "read": true, 00:04:44.363 "write": true, 00:04:44.363 "unmap": true, 00:04:44.363 "flush": true, 00:04:44.363 "reset": true, 00:04:44.363 "nvme_admin": false, 00:04:44.363 "nvme_io": false, 00:04:44.363 "nvme_io_md": false, 00:04:44.363 "write_zeroes": true, 00:04:44.363 "zcopy": true, 00:04:44.363 "get_zone_info": false, 00:04:44.363 "zone_management": false, 00:04:44.363 "zone_append": false, 00:04:44.363 "compare": false, 00:04:44.363 "compare_and_write": false, 00:04:44.363 "abort": true, 00:04:44.363 "seek_hole": false, 00:04:44.363 "seek_data": false, 00:04:44.363 "copy": true, 00:04:44.363 "nvme_iov_md": false 00:04:44.363 }, 00:04:44.363 "memory_domains": [ 00:04:44.363 { 00:04:44.363 "dma_device_id": "system", 00:04:44.363 "dma_device_type": 1 00:04:44.363 }, 00:04:44.363 { 00:04:44.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.363 "dma_device_type": 2 00:04:44.363 } 00:04:44.363 ], 00:04:44.363 "driver_specific": {} 00:04:44.363 } 00:04:44.363 ]' 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.363 [2024-11-15 10:48:31.127925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:44.363 [2024-11-15 10:48:31.127998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:44.363 [2024-11-15 10:48:31.128041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:44.363 [2024-11-15 10:48:31.128059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:44.363 [2024-11-15 10:48:31.130749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:44.363 [2024-11-15 10:48:31.130914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:44.363 Passthru0 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.363 
10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.363 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.363 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:44.363 { 00:04:44.363 "name": "Malloc0", 00:04:44.363 "aliases": [ 00:04:44.363 "0baba221-151e-4161-ad40-5f7df755137d" 00:04:44.363 ], 00:04:44.363 "product_name": "Malloc disk", 00:04:44.363 "block_size": 512, 00:04:44.363 "num_blocks": 16384, 00:04:44.363 "uuid": "0baba221-151e-4161-ad40-5f7df755137d", 00:04:44.363 "assigned_rate_limits": { 00:04:44.363 "rw_ios_per_sec": 0, 00:04:44.363 "rw_mbytes_per_sec": 0, 00:04:44.363 "r_mbytes_per_sec": 0, 00:04:44.363 "w_mbytes_per_sec": 0 00:04:44.363 }, 00:04:44.363 "claimed": true, 00:04:44.363 "claim_type": "exclusive_write", 00:04:44.363 "zoned": false, 00:04:44.363 "supported_io_types": { 00:04:44.363 "read": true, 00:04:44.363 "write": true, 00:04:44.363 "unmap": true, 00:04:44.363 "flush": true, 00:04:44.363 "reset": true, 00:04:44.363 "nvme_admin": false, 00:04:44.363 "nvme_io": false, 00:04:44.363 "nvme_io_md": false, 00:04:44.363 "write_zeroes": true, 00:04:44.363 "zcopy": true, 00:04:44.363 "get_zone_info": false, 00:04:44.363 "zone_management": false, 00:04:44.363 "zone_append": false, 00:04:44.363 "compare": false, 00:04:44.363 "compare_and_write": false, 00:04:44.363 "abort": true, 00:04:44.363 "seek_hole": false, 00:04:44.363 "seek_data": false, 00:04:44.363 "copy": true, 00:04:44.363 "nvme_iov_md": false 00:04:44.363 }, 00:04:44.363 "memory_domains": [ 00:04:44.363 { 00:04:44.363 "dma_device_id": "system", 00:04:44.363 "dma_device_type": 1 00:04:44.363 }, 00:04:44.363 { 00:04:44.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.363 "dma_device_type": 2 00:04:44.363 } 00:04:44.363 ], 00:04:44.363 "driver_specific": {} 00:04:44.363 }, 00:04:44.363 { 00:04:44.363 "name": "Passthru0", 00:04:44.363 "aliases": [ 00:04:44.363 "9fc9d003-55be-55ad-97a2-d39fe4443194" 00:04:44.363 ], 00:04:44.363 "product_name": "passthru", 00:04:44.363 "block_size": 512, 00:04:44.363 "num_blocks": 16384, 00:04:44.363 "uuid": "9fc9d003-55be-55ad-97a2-d39fe4443194", 00:04:44.363 "assigned_rate_limits": { 00:04:44.363 "rw_ios_per_sec": 0, 00:04:44.363 "rw_mbytes_per_sec": 0, 00:04:44.363 "r_mbytes_per_sec": 0, 00:04:44.363 "w_mbytes_per_sec": 0 00:04:44.363 }, 00:04:44.363 "claimed": false, 00:04:44.363 "zoned": false, 00:04:44.363 "supported_io_types": { 00:04:44.364 "read": true, 00:04:44.364 "write": true, 00:04:44.364 "unmap": true, 00:04:44.364 "flush": true, 00:04:44.364 "reset": true, 00:04:44.364 "nvme_admin": false, 00:04:44.364 "nvme_io": false, 00:04:44.364 "nvme_io_md": false, 00:04:44.364 "write_zeroes": true, 00:04:44.364 "zcopy": true, 00:04:44.364 "get_zone_info": false, 00:04:44.364 "zone_management": false, 00:04:44.364 "zone_append": false, 00:04:44.364 "compare": false, 00:04:44.364 "compare_and_write": false, 00:04:44.364 "abort": true, 00:04:44.364 "seek_hole": false, 00:04:44.364 "seek_data": false, 00:04:44.364 "copy": true, 00:04:44.364 "nvme_iov_md": false 00:04:44.364 }, 00:04:44.364 "memory_domains": [ 00:04:44.364 { 00:04:44.364 "dma_device_id": "system", 00:04:44.364 "dma_device_type": 1 00:04:44.364 }, 00:04:44.364 { 00:04:44.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.364 "dma_device_type": 2 
00:04:44.364 } 00:04:44.364 ], 00:04:44.364 "driver_specific": { 00:04:44.364 "passthru": { 00:04:44.364 "name": "Passthru0", 00:04:44.364 "base_bdev_name": "Malloc0" 00:04:44.364 } 00:04:44.364 } 00:04:44.364 } 00:04:44.364 ]' 00:04:44.364 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:44.364 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:44.364 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:44.364 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.364 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.623 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.623 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.623 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:44.623 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:44.623 ************************************ 00:04:44.623 END TEST rpc_integrity 00:04:44.623 ************************************ 00:04:44.623 10:48:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:44.623 00:04:44.623 real 0m0.352s 00:04:44.623 user 0m0.186s 00:04:44.623 sys 0m0.063s 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.623 10:48:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 10:48:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:44.623 10:48:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.623 10:48:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.623 10:48:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 ************************************ 00:04:44.623 START TEST rpc_plugins 00:04:44.623 ************************************ 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:44.623 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.623 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:44.623 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.623 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.623 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:44.623 { 00:04:44.623 "name": "Malloc1", 00:04:44.623 "aliases": 
[ 00:04:44.623 "81159e74-4375-45db-8d70-387d7d205030" 00:04:44.623 ], 00:04:44.623 "product_name": "Malloc disk", 00:04:44.623 "block_size": 4096, 00:04:44.623 "num_blocks": 256, 00:04:44.623 "uuid": "81159e74-4375-45db-8d70-387d7d205030", 00:04:44.623 "assigned_rate_limits": { 00:04:44.623 "rw_ios_per_sec": 0, 00:04:44.623 "rw_mbytes_per_sec": 0, 00:04:44.623 "r_mbytes_per_sec": 0, 00:04:44.623 "w_mbytes_per_sec": 0 00:04:44.623 }, 00:04:44.623 "claimed": false, 00:04:44.623 "zoned": false, 00:04:44.623 "supported_io_types": { 00:04:44.623 "read": true, 00:04:44.623 "write": true, 00:04:44.623 "unmap": true, 00:04:44.623 "flush": true, 00:04:44.623 "reset": true, 00:04:44.623 "nvme_admin": false, 00:04:44.623 "nvme_io": false, 00:04:44.623 "nvme_io_md": false, 00:04:44.623 "write_zeroes": true, 00:04:44.623 "zcopy": true, 00:04:44.623 "get_zone_info": false, 00:04:44.623 "zone_management": false, 00:04:44.623 "zone_append": false, 00:04:44.623 "compare": false, 00:04:44.623 "compare_and_write": false, 00:04:44.623 "abort": true, 00:04:44.623 "seek_hole": false, 00:04:44.623 "seek_data": false, 00:04:44.623 "copy": true, 00:04:44.623 "nvme_iov_md": false 00:04:44.623 }, 00:04:44.623 "memory_domains": [ 00:04:44.623 { 00:04:44.623 "dma_device_id": "system", 00:04:44.623 "dma_device_type": 1 00:04:44.623 }, 00:04:44.623 { 00:04:44.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:44.624 "dma_device_type": 2 00:04:44.624 } 00:04:44.624 ], 00:04:44.624 "driver_specific": {} 00:04:44.624 } 00:04:44.624 ]' 00:04:44.624 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:44.624 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:44.624 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:44.624 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.624 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.883 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.884 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:44.884 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.884 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.884 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:44.884 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:44.884 ************************************ 00:04:44.884 END TEST rpc_plugins 00:04:44.884 ************************************ 00:04:44.884 10:48:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:44.884 00:04:44.884 real 0m0.161s 00:04:44.884 user 0m0.086s 00:04:44.884 sys 0m0.028s 00:04:44.884 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:44.884 10:48:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 10:48:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:44.884 10:48:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:44.884 10:48:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:44.884 10:48:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 ************************************ 00:04:44.884 START TEST rpc_trace_cmd_test 00:04:44.884 ************************************ 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:44.884 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57836", 00:04:44.884 "tpoint_group_mask": "0x8", 00:04:44.884 "iscsi_conn": { 00:04:44.884 "mask": "0x2", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "scsi": { 00:04:44.884 "mask": "0x4", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "bdev": { 00:04:44.884 "mask": "0x8", 00:04:44.884 "tpoint_mask": "0xffffffffffffffff" 00:04:44.884 }, 00:04:44.884 "nvmf_rdma": { 00:04:44.884 "mask": "0x10", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "nvmf_tcp": { 00:04:44.884 "mask": "0x20", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "ftl": { 00:04:44.884 "mask": "0x40", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "blobfs": { 00:04:44.884 "mask": "0x80", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "dsa": { 00:04:44.884 "mask": "0x200", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "thread": { 00:04:44.884 "mask": "0x400", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "nvme_pcie": { 00:04:44.884 "mask": "0x800", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "iaa": { 00:04:44.884 "mask": "0x1000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "nvme_tcp": { 00:04:44.884 "mask": "0x2000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "bdev_nvme": { 00:04:44.884 "mask": "0x4000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "sock": { 00:04:44.884 "mask": "0x8000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "blob": { 00:04:44.884 "mask": "0x10000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "bdev_raid": { 00:04:44.884 "mask": "0x20000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 }, 00:04:44.884 "scheduler": { 00:04:44.884 "mask": "0x40000", 00:04:44.884 "tpoint_mask": "0x0" 00:04:44.884 } 00:04:44.884 }' 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:44.884 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:45.205 00:04:45.205 real 0m0.227s 00:04:45.205 user 0m0.180s 00:04:45.205 sys 0m0.040s 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:04:45.205 10:48:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:45.205 ************************************ 00:04:45.205 END TEST rpc_trace_cmd_test 00:04:45.205 ************************************ 00:04:45.205 10:48:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:45.205 10:48:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:45.205 10:48:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:45.205 10:48:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.205 10:48:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.205 10:48:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.205 ************************************ 00:04:45.205 START TEST rpc_daemon_integrity 00:04:45.205 ************************************ 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.205 10:48:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.205 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.205 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.205 { 00:04:45.205 "name": "Malloc2", 00:04:45.205 "aliases": [ 00:04:45.205 "4fbab5be-d252-4a8d-b009-dee724dc1c26" 00:04:45.205 ], 00:04:45.205 "product_name": "Malloc disk", 00:04:45.205 "block_size": 512, 00:04:45.205 "num_blocks": 16384, 00:04:45.205 "uuid": "4fbab5be-d252-4a8d-b009-dee724dc1c26", 00:04:45.205 "assigned_rate_limits": { 00:04:45.205 "rw_ios_per_sec": 0, 00:04:45.205 "rw_mbytes_per_sec": 0, 00:04:45.205 "r_mbytes_per_sec": 0, 00:04:45.205 "w_mbytes_per_sec": 0 00:04:45.205 }, 00:04:45.205 "claimed": false, 00:04:45.205 "zoned": false, 00:04:45.205 "supported_io_types": { 00:04:45.205 "read": true, 00:04:45.205 "write": true, 00:04:45.205 "unmap": true, 00:04:45.205 "flush": true, 00:04:45.205 "reset": true, 00:04:45.205 "nvme_admin": false, 00:04:45.205 "nvme_io": false, 00:04:45.205 "nvme_io_md": false, 00:04:45.205 "write_zeroes": true, 00:04:45.205 "zcopy": true, 00:04:45.205 "get_zone_info": false, 00:04:45.205 "zone_management": false, 00:04:45.205 "zone_append": false, 00:04:45.205 "compare": false, 00:04:45.205 
"compare_and_write": false, 00:04:45.205 "abort": true, 00:04:45.205 "seek_hole": false, 00:04:45.205 "seek_data": false, 00:04:45.205 "copy": true, 00:04:45.205 "nvme_iov_md": false 00:04:45.205 }, 00:04:45.205 "memory_domains": [ 00:04:45.205 { 00:04:45.205 "dma_device_id": "system", 00:04:45.205 "dma_device_type": 1 00:04:45.205 }, 00:04:45.205 { 00:04:45.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.205 "dma_device_type": 2 00:04:45.205 } 00:04:45.205 ], 00:04:45.205 "driver_specific": {} 00:04:45.205 } 00:04:45.205 ]' 00:04:45.205 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.491 [2024-11-15 10:48:32.073102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:45.491 [2024-11-15 10:48:32.073178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.491 [2024-11-15 10:48:32.073205] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:45.491 [2024-11-15 10:48:32.073219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.491 [2024-11-15 10:48:32.075838] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.491 [2024-11-15 10:48:32.075880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.491 Passthru0 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.491 { 00:04:45.491 "name": "Malloc2", 00:04:45.491 "aliases": [ 00:04:45.491 "4fbab5be-d252-4a8d-b009-dee724dc1c26" 00:04:45.491 ], 00:04:45.491 "product_name": "Malloc disk", 00:04:45.491 "block_size": 512, 00:04:45.491 "num_blocks": 16384, 00:04:45.491 "uuid": "4fbab5be-d252-4a8d-b009-dee724dc1c26", 00:04:45.491 "assigned_rate_limits": { 00:04:45.491 "rw_ios_per_sec": 0, 00:04:45.491 "rw_mbytes_per_sec": 0, 00:04:45.491 "r_mbytes_per_sec": 0, 00:04:45.491 "w_mbytes_per_sec": 0 00:04:45.491 }, 00:04:45.491 "claimed": true, 00:04:45.491 "claim_type": "exclusive_write", 00:04:45.491 "zoned": false, 00:04:45.491 "supported_io_types": { 00:04:45.491 "read": true, 00:04:45.491 "write": true, 00:04:45.491 "unmap": true, 00:04:45.491 "flush": true, 00:04:45.491 "reset": true, 00:04:45.491 "nvme_admin": false, 00:04:45.491 "nvme_io": false, 00:04:45.491 "nvme_io_md": false, 00:04:45.491 "write_zeroes": true, 00:04:45.491 "zcopy": true, 00:04:45.491 "get_zone_info": false, 00:04:45.491 "zone_management": false, 00:04:45.491 "zone_append": false, 00:04:45.491 "compare": false, 00:04:45.491 "compare_and_write": false, 00:04:45.491 "abort": true, 00:04:45.491 "seek_hole": false, 00:04:45.491 "seek_data": false, 
00:04:45.491 "copy": true, 00:04:45.491 "nvme_iov_md": false 00:04:45.491 }, 00:04:45.491 "memory_domains": [ 00:04:45.491 { 00:04:45.491 "dma_device_id": "system", 00:04:45.491 "dma_device_type": 1 00:04:45.491 }, 00:04:45.491 { 00:04:45.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.491 "dma_device_type": 2 00:04:45.491 } 00:04:45.491 ], 00:04:45.491 "driver_specific": {} 00:04:45.491 }, 00:04:45.491 { 00:04:45.491 "name": "Passthru0", 00:04:45.491 "aliases": [ 00:04:45.491 "0e6dbbf0-2631-57de-9a33-129716bf2b9a" 00:04:45.491 ], 00:04:45.491 "product_name": "passthru", 00:04:45.491 "block_size": 512, 00:04:45.491 "num_blocks": 16384, 00:04:45.491 "uuid": "0e6dbbf0-2631-57de-9a33-129716bf2b9a", 00:04:45.491 "assigned_rate_limits": { 00:04:45.491 "rw_ios_per_sec": 0, 00:04:45.491 "rw_mbytes_per_sec": 0, 00:04:45.491 "r_mbytes_per_sec": 0, 00:04:45.491 "w_mbytes_per_sec": 0 00:04:45.491 }, 00:04:45.491 "claimed": false, 00:04:45.491 "zoned": false, 00:04:45.491 "supported_io_types": { 00:04:45.491 "read": true, 00:04:45.491 "write": true, 00:04:45.491 "unmap": true, 00:04:45.491 "flush": true, 00:04:45.491 "reset": true, 00:04:45.491 "nvme_admin": false, 00:04:45.491 "nvme_io": false, 00:04:45.491 "nvme_io_md": false, 00:04:45.491 "write_zeroes": true, 00:04:45.491 "zcopy": true, 00:04:45.491 "get_zone_info": false, 00:04:45.491 "zone_management": false, 00:04:45.491 "zone_append": false, 00:04:45.491 "compare": false, 00:04:45.491 "compare_and_write": false, 00:04:45.491 "abort": true, 00:04:45.491 "seek_hole": false, 00:04:45.491 "seek_data": false, 00:04:45.491 "copy": true, 00:04:45.491 "nvme_iov_md": false 00:04:45.491 }, 00:04:45.491 "memory_domains": [ 00:04:45.491 { 00:04:45.491 "dma_device_id": "system", 00:04:45.491 "dma_device_type": 1 00:04:45.491 }, 00:04:45.491 { 00:04:45.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.491 "dma_device_type": 2 00:04:45.491 } 00:04:45.491 ], 00:04:45.491 "driver_specific": { 00:04:45.491 "passthru": { 00:04:45.491 "name": "Passthru0", 00:04:45.491 "base_bdev_name": "Malloc2" 00:04:45.491 } 00:04:45.491 } 00:04:45.491 } 00:04:45.491 ]' 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:45.491 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:45.492 ************************************ 00:04:45.492 END TEST rpc_daemon_integrity 00:04:45.492 ************************************ 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:45.492 00:04:45.492 real 0m0.344s 00:04:45.492 user 0m0.193s 00:04:45.492 sys 0m0.058s 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.492 10:48:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.492 10:48:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:45.492 10:48:32 rpc -- rpc/rpc.sh@84 -- # killprocess 57836 00:04:45.492 10:48:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 57836 ']' 00:04:45.492 10:48:32 rpc -- common/autotest_common.sh@958 -- # kill -0 57836 00:04:45.492 10:48:32 rpc -- common/autotest_common.sh@959 -- # uname 00:04:45.492 10:48:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.492 10:48:32 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57836 00:04:45.751 killing process with pid 57836 00:04:45.751 10:48:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.751 10:48:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.751 10:48:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57836' 00:04:45.751 10:48:32 rpc -- common/autotest_common.sh@973 -- # kill 57836 00:04:45.751 10:48:32 rpc -- common/autotest_common.sh@978 -- # wait 57836 00:04:48.289 00:04:48.289 real 0m5.331s 00:04:48.289 user 0m5.826s 00:04:48.289 sys 0m0.965s 00:04:48.289 10:48:34 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.289 ************************************ 00:04:48.289 END TEST rpc 00:04:48.289 10:48:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.289 ************************************ 00:04:48.289 10:48:34 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.289 10:48:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.289 10:48:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.289 10:48:34 -- common/autotest_common.sh@10 -- # set +x 00:04:48.289 ************************************ 00:04:48.289 START TEST skip_rpc 00:04:48.289 ************************************ 00:04:48.289 10:48:34 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.289 * Looking for test storage... 
00:04:48.289 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.289 10:48:34 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:48.289 10:48:34 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:48.289 10:48:34 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:48.289 10:48:34 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.289 10:48:34 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.289 10:48:35 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.289 10:48:35 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.289 10:48:35 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.290 --rc genhtml_branch_coverage=1 00:04:48.290 --rc genhtml_function_coverage=1 00:04:48.290 --rc genhtml_legend=1 00:04:48.290 --rc geninfo_all_blocks=1 00:04:48.290 --rc geninfo_unexecuted_blocks=1 00:04:48.290 00:04:48.290 ' 00:04:48.290 10:48:35 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.290 --rc genhtml_branch_coverage=1 00:04:48.290 --rc genhtml_function_coverage=1 00:04:48.290 --rc genhtml_legend=1 00:04:48.290 --rc geninfo_all_blocks=1 00:04:48.290 --rc geninfo_unexecuted_blocks=1 00:04:48.290 00:04:48.290 ' 00:04:48.290 10:48:35 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.290 --rc genhtml_branch_coverage=1 00:04:48.290 --rc genhtml_function_coverage=1 00:04:48.290 --rc genhtml_legend=1 00:04:48.290 --rc geninfo_all_blocks=1 00:04:48.290 --rc geninfo_unexecuted_blocks=1 00:04:48.290 00:04:48.290 ' 00:04:48.290 10:48:35 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:48.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.290 --rc genhtml_branch_coverage=1 00:04:48.290 --rc genhtml_function_coverage=1 00:04:48.290 --rc genhtml_legend=1 00:04:48.290 --rc geninfo_all_blocks=1 00:04:48.290 --rc geninfo_unexecuted_blocks=1 00:04:48.290 00:04:48.290 ' 00:04:48.290 10:48:35 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.290 10:48:35 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.290 10:48:35 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.290 10:48:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.290 10:48:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.290 10:48:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.290 ************************************ 00:04:48.290 START TEST skip_rpc 00:04:48.290 ************************************ 00:04:48.290 10:48:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:48.290 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58065 00:04:48.290 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.290 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.290 10:48:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.290 [2024-11-15 10:48:35.145589] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
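The `lt 1.15 2` / `cmp_versions` trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, which picks the spelling of the `--rc` coverage options exported here. A sketch trimmed to the '<' case (the real helper handles other operators and sanitizes fields through `decimal`); the skip_rpc target's EAL startup log continues below:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
  local -a ver1 ver2; local v len
  IFS=.- read -ra ver1 <<< "$1"       # split on '.' and '-', as traced above
  IFS=.- read -ra ver2 <<< "$3"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: '<' holds
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field: '<' fails
  done
  return 1                                            # all fields equal: not less-than
}
lt 1.15 2 && echo "old lcov: keep the legacy --rc lcov_*_coverage spelling"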
00:04:48.290 [2024-11-15 10:48:35.145721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58065 ] 00:04:48.548 [2024-11-15 10:48:35.326706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.807 [2024-11-15 10:48:35.442987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58065 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58065 ']' 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58065 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58065 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.141 killing process with pid 58065 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58065' 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58065 00:04:54.141 10:48:40 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58065 00:04:56.047 00:04:56.047 real 0m7.466s 00:04:56.047 user 0m6.971s 00:04:56.047 sys 0m0.419s 00:04:56.047 10:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.047 10:48:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.047 ************************************ 00:04:56.047 END TEST skip_rpc 00:04:56.047 
************************************ 00:04:56.047 10:48:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:56.047 10:48:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.047 10:48:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.047 10:48:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.047 ************************************ 00:04:56.047 START TEST skip_rpc_with_json 00:04:56.047 ************************************ 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58169 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58169 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58169 ']' 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.047 10:48:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:56.047 [2024-11-15 10:48:42.685498] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
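`waitforlisten 58169` above blocks until the freshly launched spdk_tgt actually answers on /var/tmp/spdk.sock instead of sleeping blindly. A rough sketch of that polling loop (the retry budget and interval here are illustrative, not the harness's exact values):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
spdk_pid=$!
for (( i = 0; i < 100; i++ )); do
  kill -0 "$spdk_pid" || { echo "target died before listening" >&2; exit 1; }
  $rpc -s /var/tmp/spdk.sock spdk_get_version &>/dev/null && break   # socket is live
  sleep 0.1
done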
00:04:56.047 [2024-11-15 10:48:42.685638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58169 ] 00:04:56.047 [2024-11-15 10:48:42.870683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.306 [2024-11-15 10:48:42.994589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.245 [2024-11-15 10:48:43.857723] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:57.245 request: 00:04:57.245 { 00:04:57.245 "trtype": "tcp", 00:04:57.245 "method": "nvmf_get_transports", 00:04:57.245 "req_id": 1 00:04:57.245 } 00:04:57.245 Got JSON-RPC error response 00:04:57.245 response: 00:04:57.245 { 00:04:57.245 "code": -19, 00:04:57.245 "message": "No such device" 00:04:57.245 } 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.245 [2024-11-15 10:48:43.869833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.245 10:48:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:57.245 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.245 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:57.245 { 00:04:57.245 "subsystems": [ 00:04:57.245 { 00:04:57.245 "subsystem": "fsdev", 00:04:57.245 "config": [ 00:04:57.245 { 00:04:57.245 "method": "fsdev_set_opts", 00:04:57.245 "params": { 00:04:57.245 "fsdev_io_pool_size": 65535, 00:04:57.245 "fsdev_io_cache_size": 256 00:04:57.245 } 00:04:57.245 } 00:04:57.245 ] 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "subsystem": "keyring", 00:04:57.245 "config": [] 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "subsystem": "iobuf", 00:04:57.245 "config": [ 00:04:57.245 { 00:04:57.245 "method": "iobuf_set_options", 00:04:57.245 "params": { 00:04:57.245 "small_pool_count": 8192, 00:04:57.245 "large_pool_count": 1024, 00:04:57.245 "small_bufsize": 8192, 00:04:57.245 "large_bufsize": 135168, 00:04:57.245 "enable_numa": false 00:04:57.245 } 00:04:57.245 } 00:04:57.245 ] 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "subsystem": "sock", 00:04:57.245 "config": [ 00:04:57.245 { 
00:04:57.245 "method": "sock_set_default_impl", 00:04:57.245 "params": { 00:04:57.245 "impl_name": "posix" 00:04:57.245 } 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "method": "sock_impl_set_options", 00:04:57.245 "params": { 00:04:57.245 "impl_name": "ssl", 00:04:57.245 "recv_buf_size": 4096, 00:04:57.245 "send_buf_size": 4096, 00:04:57.245 "enable_recv_pipe": true, 00:04:57.245 "enable_quickack": false, 00:04:57.245 "enable_placement_id": 0, 00:04:57.245 "enable_zerocopy_send_server": true, 00:04:57.245 "enable_zerocopy_send_client": false, 00:04:57.245 "zerocopy_threshold": 0, 00:04:57.245 "tls_version": 0, 00:04:57.245 "enable_ktls": false 00:04:57.245 } 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "method": "sock_impl_set_options", 00:04:57.245 "params": { 00:04:57.245 "impl_name": "posix", 00:04:57.245 "recv_buf_size": 2097152, 00:04:57.245 "send_buf_size": 2097152, 00:04:57.245 "enable_recv_pipe": true, 00:04:57.245 "enable_quickack": false, 00:04:57.245 "enable_placement_id": 0, 00:04:57.245 "enable_zerocopy_send_server": true, 00:04:57.245 "enable_zerocopy_send_client": false, 00:04:57.245 "zerocopy_threshold": 0, 00:04:57.245 "tls_version": 0, 00:04:57.245 "enable_ktls": false 00:04:57.245 } 00:04:57.245 } 00:04:57.245 ] 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "subsystem": "vmd", 00:04:57.245 "config": [] 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "subsystem": "accel", 00:04:57.245 "config": [ 00:04:57.245 { 00:04:57.245 "method": "accel_set_options", 00:04:57.245 "params": { 00:04:57.245 "small_cache_size": 128, 00:04:57.245 "large_cache_size": 16, 00:04:57.245 "task_count": 2048, 00:04:57.245 "sequence_count": 2048, 00:04:57.245 "buf_count": 2048 00:04:57.245 } 00:04:57.245 } 00:04:57.245 ] 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "subsystem": "bdev", 00:04:57.245 "config": [ 00:04:57.245 { 00:04:57.245 "method": "bdev_set_options", 00:04:57.245 "params": { 00:04:57.245 "bdev_io_pool_size": 65535, 00:04:57.245 "bdev_io_cache_size": 256, 00:04:57.245 "bdev_auto_examine": true, 00:04:57.245 "iobuf_small_cache_size": 128, 00:04:57.245 "iobuf_large_cache_size": 16 00:04:57.245 } 00:04:57.245 }, 00:04:57.245 { 00:04:57.245 "method": "bdev_raid_set_options", 00:04:57.245 "params": { 00:04:57.245 "process_window_size_kb": 1024, 00:04:57.245 "process_max_bandwidth_mb_sec": 0 00:04:57.245 } 00:04:57.245 }, 00:04:57.245 { 00:04:57.246 "method": "bdev_iscsi_set_options", 00:04:57.246 "params": { 00:04:57.246 "timeout_sec": 30 00:04:57.246 } 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "method": "bdev_nvme_set_options", 00:04:57.246 "params": { 00:04:57.246 "action_on_timeout": "none", 00:04:57.246 "timeout_us": 0, 00:04:57.246 "timeout_admin_us": 0, 00:04:57.246 "keep_alive_timeout_ms": 10000, 00:04:57.246 "arbitration_burst": 0, 00:04:57.246 "low_priority_weight": 0, 00:04:57.246 "medium_priority_weight": 0, 00:04:57.246 "high_priority_weight": 0, 00:04:57.246 "nvme_adminq_poll_period_us": 10000, 00:04:57.246 "nvme_ioq_poll_period_us": 0, 00:04:57.246 "io_queue_requests": 0, 00:04:57.246 "delay_cmd_submit": true, 00:04:57.246 "transport_retry_count": 4, 00:04:57.246 "bdev_retry_count": 3, 00:04:57.246 "transport_ack_timeout": 0, 00:04:57.246 "ctrlr_loss_timeout_sec": 0, 00:04:57.246 "reconnect_delay_sec": 0, 00:04:57.246 "fast_io_fail_timeout_sec": 0, 00:04:57.246 "disable_auto_failback": false, 00:04:57.246 "generate_uuids": false, 00:04:57.246 "transport_tos": 0, 00:04:57.246 "nvme_error_stat": false, 00:04:57.246 "rdma_srq_size": 0, 00:04:57.246 "io_path_stat": false, 
00:04:57.246 "allow_accel_sequence": false, 00:04:57.246 "rdma_max_cq_size": 0, 00:04:57.246 "rdma_cm_event_timeout_ms": 0, 00:04:57.246 "dhchap_digests": [ 00:04:57.246 "sha256", 00:04:57.246 "sha384", 00:04:57.246 "sha512" 00:04:57.246 ], 00:04:57.246 "dhchap_dhgroups": [ 00:04:57.246 "null", 00:04:57.246 "ffdhe2048", 00:04:57.246 "ffdhe3072", 00:04:57.246 "ffdhe4096", 00:04:57.246 "ffdhe6144", 00:04:57.246 "ffdhe8192" 00:04:57.246 ] 00:04:57.246 } 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "method": "bdev_nvme_set_hotplug", 00:04:57.246 "params": { 00:04:57.246 "period_us": 100000, 00:04:57.246 "enable": false 00:04:57.246 } 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "method": "bdev_wait_for_examine" 00:04:57.246 } 00:04:57.246 ] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "scsi", 00:04:57.246 "config": null 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "scheduler", 00:04:57.246 "config": [ 00:04:57.246 { 00:04:57.246 "method": "framework_set_scheduler", 00:04:57.246 "params": { 00:04:57.246 "name": "static" 00:04:57.246 } 00:04:57.246 } 00:04:57.246 ] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "vhost_scsi", 00:04:57.246 "config": [] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "vhost_blk", 00:04:57.246 "config": [] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "ublk", 00:04:57.246 "config": [] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "nbd", 00:04:57.246 "config": [] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "nvmf", 00:04:57.246 "config": [ 00:04:57.246 { 00:04:57.246 "method": "nvmf_set_config", 00:04:57.246 "params": { 00:04:57.246 "discovery_filter": "match_any", 00:04:57.246 "admin_cmd_passthru": { 00:04:57.246 "identify_ctrlr": false 00:04:57.246 }, 00:04:57.246 "dhchap_digests": [ 00:04:57.246 "sha256", 00:04:57.246 "sha384", 00:04:57.246 "sha512" 00:04:57.246 ], 00:04:57.246 "dhchap_dhgroups": [ 00:04:57.246 "null", 00:04:57.246 "ffdhe2048", 00:04:57.246 "ffdhe3072", 00:04:57.246 "ffdhe4096", 00:04:57.246 "ffdhe6144", 00:04:57.246 "ffdhe8192" 00:04:57.246 ] 00:04:57.246 } 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "method": "nvmf_set_max_subsystems", 00:04:57.246 "params": { 00:04:57.246 "max_subsystems": 1024 00:04:57.246 } 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "method": "nvmf_set_crdt", 00:04:57.246 "params": { 00:04:57.246 "crdt1": 0, 00:04:57.246 "crdt2": 0, 00:04:57.246 "crdt3": 0 00:04:57.246 } 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "method": "nvmf_create_transport", 00:04:57.246 "params": { 00:04:57.246 "trtype": "TCP", 00:04:57.246 "max_queue_depth": 128, 00:04:57.246 "max_io_qpairs_per_ctrlr": 127, 00:04:57.246 "in_capsule_data_size": 4096, 00:04:57.246 "max_io_size": 131072, 00:04:57.246 "io_unit_size": 131072, 00:04:57.246 "max_aq_depth": 128, 00:04:57.246 "num_shared_buffers": 511, 00:04:57.246 "buf_cache_size": 4294967295, 00:04:57.246 "dif_insert_or_strip": false, 00:04:57.246 "zcopy": false, 00:04:57.246 "c2h_success": true, 00:04:57.246 "sock_priority": 0, 00:04:57.246 "abort_timeout_sec": 1, 00:04:57.246 "ack_timeout": 0, 00:04:57.246 "data_wr_pool_size": 0 00:04:57.246 } 00:04:57.246 } 00:04:57.246 ] 00:04:57.246 }, 00:04:57.246 { 00:04:57.246 "subsystem": "iscsi", 00:04:57.246 "config": [ 00:04:57.246 { 00:04:57.246 "method": "iscsi_set_options", 00:04:57.246 "params": { 00:04:57.246 "node_base": "iqn.2016-06.io.spdk", 00:04:57.246 "max_sessions": 128, 00:04:57.246 "max_connections_per_session": 2, 00:04:57.246 "max_queue_depth": 64, 00:04:57.246 
"default_time2wait": 2, 00:04:57.246 "default_time2retain": 20, 00:04:57.246 "first_burst_length": 8192, 00:04:57.246 "immediate_data": true, 00:04:57.246 "allow_duplicated_isid": false, 00:04:57.246 "error_recovery_level": 0, 00:04:57.246 "nop_timeout": 60, 00:04:57.246 "nop_in_interval": 30, 00:04:57.246 "disable_chap": false, 00:04:57.246 "require_chap": false, 00:04:57.246 "mutual_chap": false, 00:04:57.246 "chap_group": 0, 00:04:57.246 "max_large_datain_per_connection": 64, 00:04:57.246 "max_r2t_per_connection": 4, 00:04:57.246 "pdu_pool_size": 36864, 00:04:57.246 "immediate_data_pool_size": 16384, 00:04:57.246 "data_out_pool_size": 2048 00:04:57.246 } 00:04:57.246 } 00:04:57.246 ] 00:04:57.246 } 00:04:57.246 ] 00:04:57.246 } 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58169 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58169 ']' 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58169 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.246 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58169 00:04:57.506 killing process with pid 58169 00:04:57.506 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.506 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.506 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58169' 00:04:57.506 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58169 00:04:57.506 10:48:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58169 00:05:00.045 10:48:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58225 00:05:00.045 10:48:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.045 10:48:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58225 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58225 ']' 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58225 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58225 00:05:05.322 killing process with pid 58225 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58225' 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58225 00:05:05.322 10:48:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58225 00:05:07.228 10:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:07.228 10:48:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:07.228 00:05:07.228 real 0m11.413s 00:05:07.228 user 0m10.792s 00:05:07.228 sys 0m0.953s 00:05:07.228 ************************************ 00:05:07.228 END TEST skip_rpc_with_json 00:05:07.228 ************************************ 00:05:07.228 10:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.228 10:48:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.228 10:48:54 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:07.228 10:48:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.228 10:48:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.228 10:48:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.228 ************************************ 00:05:07.228 START TEST skip_rpc_with_delay 00:05:07.228 ************************************ 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:07.228 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:07.488 [2024-11-15 10:48:54.177100] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
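The *ERROR* line above is the whole point of skip_rpc_with_delay: `--wait-for-rpc` pauses startup until a framework_start_init RPC arrives, which can never happen once `--no-rpc-server` has disabled the listener, so the app must refuse to start. The NOT wrapper asserts exactly that expected failure; a standalone sketch:

# Expected-failure check: the two flags are contradictory by design.
if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
  echo "BUG: contradictory flags were accepted" >&2
  exit 1
fi
echo "spdk_tgt exited non-zero, as the NOT helper requires"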
00:05:07.488 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:07.488 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.488 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.488 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.488 00:05:07.488 real 0m0.183s 00:05:07.488 user 0m0.089s 00:05:07.488 sys 0m0.093s 00:05:07.488 ************************************ 00:05:07.488 END TEST skip_rpc_with_delay 00:05:07.488 ************************************ 00:05:07.488 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.488 10:48:54 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:07.488 10:48:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:07.488 10:48:54 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:07.488 10:48:54 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:07.488 10:48:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.488 10:48:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.488 10:48:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.488 ************************************ 00:05:07.488 START TEST exit_on_failed_rpc_init 00:05:07.488 ************************************ 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:07.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58359 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58359 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58359 ']' 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.488 10:48:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.747 [2024-11-15 10:48:54.445098] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:05:07.747 [2024-11-15 10:48:54.445222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58359 ] 00:05:08.005 [2024-11-15 10:48:54.624283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.005 [2024-11-15 10:48:54.740767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.980 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.980 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:08.981 10:48:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:08.981 [2024-11-15 10:48:55.733954] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:08.981 [2024-11-15 10:48:55.734276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58382 ] 00:05:09.240 [2024-11-15 10:48:55.920034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.240 [2024-11-15 10:48:56.034813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.240 [2024-11-15 10:48:56.034923] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
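The rpc.c error above (completed by the rpc_initialize and spdk_app_stop lines that follow) comes from the second target, core mask 0x2, trying to bind the default RPC socket the pid-58359 instance already owns; the harness then folds the non-zero es=234 down to es=1. A sketch provoking the same conflict outside the harness (the sleep is a crude stand-in for waitforlisten):

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
$bin -m 0x1 & first=$!         # first instance claims /var/tmp/spdk.sock
sleep 2                        # crude stand-in for the harness's waitforlisten
if $bin -m 0x2; then           # same socket path -> rpc_listen must fail
  echo "unexpected: second target came up" >&2
fi
kill "$first" && wait "$first"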
00:05:09.240 [2024-11-15 10:48:56.034940] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:09.240 [2024-11-15 10:48:56.034961] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58359 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58359 ']' 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58359 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58359 00:05:09.499 killing process with pid 58359 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58359' 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58359 00:05:09.499 10:48:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58359 00:05:12.036 00:05:12.036 real 0m4.426s 00:05:12.036 user 0m4.810s 00:05:12.036 sys 0m0.637s 00:05:12.036 ************************************ 00:05:12.036 END TEST exit_on_failed_rpc_init 00:05:12.036 ************************************ 00:05:12.036 10:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.036 10:48:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.036 10:48:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.036 00:05:12.036 real 0m23.993s 00:05:12.036 user 0m22.860s 00:05:12.036 sys 0m2.413s 00:05:12.036 10:48:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.036 10:48:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.036 ************************************ 00:05:12.036 END TEST skip_rpc 00:05:12.036 ************************************ 00:05:12.036 10:48:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:12.036 10:48:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.036 10:48:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.036 10:48:58 -- common/autotest_common.sh@10 -- # set +x 00:05:12.036 
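The rpc_client suite started here runs `rpc_client_test`, a C client speaking the same JSON-RPC framing over the Unix socket that rpc.py uses. For a quick manual poke at that wire protocol against a live target, something like the following should work, assuming a netcat build with Unix-socket support (`-U`; timeout flags vary between netcat variants):

printf '{"jsonrpc":"2.0","method":"spdk_get_version","id":1}' \
  | nc -U -w 1 /var/tmp/spdk.sock
# expected shape of the reply (illustrative): {"jsonrpc":"2.0","id":1,"result":{"version":"SPDK v25.01-pre ..."}}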
************************************ 00:05:12.036 START TEST rpc_client 00:05:12.036 ************************************ 00:05:12.036 10:48:58 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:12.296 * Looking for test storage... 00:05:12.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:12.296 10:48:59 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.296 10:48:59 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.297 10:48:59 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.297 --rc genhtml_branch_coverage=1 00:05:12.297 --rc genhtml_function_coverage=1 00:05:12.297 --rc genhtml_legend=1 00:05:12.297 --rc geninfo_all_blocks=1 00:05:12.297 --rc geninfo_unexecuted_blocks=1 00:05:12.297 00:05:12.297 ' 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.297 --rc genhtml_branch_coverage=1 00:05:12.297 --rc genhtml_function_coverage=1 00:05:12.297 --rc genhtml_legend=1 00:05:12.297 --rc geninfo_all_blocks=1 00:05:12.297 --rc geninfo_unexecuted_blocks=1 00:05:12.297 00:05:12.297 ' 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.297 --rc genhtml_branch_coverage=1 00:05:12.297 --rc genhtml_function_coverage=1 00:05:12.297 --rc genhtml_legend=1 00:05:12.297 --rc geninfo_all_blocks=1 00:05:12.297 --rc geninfo_unexecuted_blocks=1 00:05:12.297 00:05:12.297 ' 00:05:12.297 10:48:59 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.297 --rc genhtml_branch_coverage=1 00:05:12.297 --rc genhtml_function_coverage=1 00:05:12.297 --rc genhtml_legend=1 00:05:12.297 --rc geninfo_all_blocks=1 00:05:12.297 --rc geninfo_unexecuted_blocks=1 00:05:12.297 00:05:12.297 ' 00:05:12.297 10:48:59 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:12.297 OK 00:05:12.557 10:48:59 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:12.557 00:05:12.557 real 0m0.305s 00:05:12.557 user 0m0.168s 00:05:12.557 sys 0m0.153s 00:05:12.557 10:48:59 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.557 10:48:59 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:12.557 ************************************ 00:05:12.557 END TEST rpc_client 00:05:12.557 ************************************ 00:05:12.557 10:48:59 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:12.557 10:48:59 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.557 10:48:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.557 10:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.557 ************************************ 00:05:12.557 START TEST json_config 00:05:12.557 ************************************ 00:05:12.557 10:48:59 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:12.557 10:48:59 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.557 10:48:59 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:12.557 10:48:59 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.557 10:48:59 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:12.557 10:48:59 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.557 10:48:59 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.557 10:48:59 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.557 10:48:59 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.557 10:48:59 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.557 10:48:59 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.557 10:48:59 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.557 10:48:59 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.557 10:48:59 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.557 10:48:59 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.557 10:48:59 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.557 10:48:59 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:12.557 10:48:59 json_config -- scripts/common.sh@345 -- # : 1 00:05:12.557 10:48:59 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.557 10:48:59 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:12.817 10:48:59 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:12.817 10:48:59 json_config -- scripts/common.sh@353 -- # local d=1 00:05:12.817 10:48:59 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.817 10:48:59 json_config -- scripts/common.sh@355 -- # echo 1 00:05:12.817 10:48:59 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.817 10:48:59 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:12.817 10:48:59 json_config -- scripts/common.sh@353 -- # local d=2 00:05:12.817 10:48:59 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.817 10:48:59 json_config -- scripts/common.sh@355 -- # echo 2 00:05:12.817 10:48:59 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.817 10:48:59 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.817 10:48:59 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.817 10:48:59 json_config -- scripts/common.sh@368 -- # return 0 00:05:12.817 10:48:59 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.817 10:48:59 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.817 --rc genhtml_branch_coverage=1 00:05:12.817 --rc genhtml_function_coverage=1 00:05:12.817 --rc genhtml_legend=1 00:05:12.817 --rc geninfo_all_blocks=1 00:05:12.817 --rc geninfo_unexecuted_blocks=1 00:05:12.817 00:05:12.817 ' 00:05:12.817 10:48:59 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.817 --rc genhtml_branch_coverage=1 00:05:12.817 --rc genhtml_function_coverage=1 00:05:12.817 --rc genhtml_legend=1 00:05:12.817 --rc geninfo_all_blocks=1 00:05:12.817 --rc geninfo_unexecuted_blocks=1 00:05:12.817 00:05:12.817 ' 00:05:12.817 10:48:59 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.817 --rc genhtml_branch_coverage=1 00:05:12.817 --rc genhtml_function_coverage=1 00:05:12.817 --rc genhtml_legend=1 00:05:12.817 --rc geninfo_all_blocks=1 00:05:12.817 --rc geninfo_unexecuted_blocks=1 00:05:12.817 00:05:12.817 ' 00:05:12.817 10:48:59 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:12.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.817 --rc genhtml_branch_coverage=1 00:05:12.817 --rc genhtml_function_coverage=1 00:05:12.817 --rc genhtml_legend=1 00:05:12.817 --rc geninfo_all_blocks=1 00:05:12.817 --rc geninfo_unexecuted_blocks=1 00:05:12.817 00:05:12.817 ' 00:05:12.817 10:48:59 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.817 10:48:59 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1c0aaf4c-a905-4d21-869d-96349a84a203 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=1c0aaf4c-a905-4d21-869d-96349a84a203 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:12.817 10:48:59 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:12.817 10:48:59 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.817 10:48:59 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.817 10:48:59 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.817 10:48:59 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.817 10:48:59 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.817 10:48:59 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.817 10:48:59 json_config -- paths/export.sh@5 -- # export PATH 00:05:12.817 10:48:59 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@51 -- # : 0 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:12.817 10:48:59 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:12.818 10:48:59 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.818 10:48:59 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.818 10:48:59 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.818 10:48:59 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:12.818 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:12.818 10:48:59 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:12.818 10:48:59 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:12.818 10:48:59 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:12.818 WARNING: No tests are enabled so not running JSON configuration tests 00:05:12.818 10:48:59 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:12.818 00:05:12.818 real 0m0.230s 00:05:12.818 user 0m0.131s 00:05:12.818 sys 0m0.099s 00:05:12.818 10:48:59 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.818 10:48:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.818 ************************************ 00:05:12.818 END TEST json_config 00:05:12.818 ************************************ 00:05:12.818 10:48:59 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.818 10:48:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.818 10:48:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.818 10:48:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.818 ************************************ 00:05:12.818 START TEST json_config_extra_key 00:05:12.818 ************************************ 00:05:12.818 10:48:59 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:12.818 10:48:59 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:12.818 10:48:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:12.818 10:48:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.077 10:48:59 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.077 10:48:59 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.077 10:48:59 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.078 10:48:59 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:13.078 10:48:59 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.078 10:48:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.078 --rc genhtml_branch_coverage=1 00:05:13.078 --rc genhtml_function_coverage=1 00:05:13.078 --rc genhtml_legend=1 00:05:13.078 --rc geninfo_all_blocks=1 00:05:13.078 --rc geninfo_unexecuted_blocks=1 00:05:13.078 00:05:13.078 ' 00:05:13.078 10:48:59 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.078 --rc genhtml_branch_coverage=1 00:05:13.078 --rc genhtml_function_coverage=1 00:05:13.078 --rc genhtml_legend=1 00:05:13.078 --rc geninfo_all_blocks=1 00:05:13.078 --rc geninfo_unexecuted_blocks=1 00:05:13.078 00:05:13.078 ' 00:05:13.078 10:48:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.078 --rc genhtml_branch_coverage=1 00:05:13.078 --rc genhtml_function_coverage=1 00:05:13.078 --rc genhtml_legend=1 00:05:13.078 --rc geninfo_all_blocks=1 00:05:13.078 --rc geninfo_unexecuted_blocks=1 00:05:13.078 00:05:13.078 ' 00:05:13.078 10:48:59 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.078 --rc genhtml_branch_coverage=1 00:05:13.078 --rc 
genhtml_function_coverage=1 00:05:13.078 --rc genhtml_legend=1 00:05:13.078 --rc geninfo_all_blocks=1 00:05:13.078 --rc geninfo_unexecuted_blocks=1 00:05:13.078 00:05:13.078 ' 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1c0aaf4c-a905-4d21-869d-96349a84a203 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1c0aaf4c-a905-4d21-869d-96349a84a203 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:13.078 10:48:59 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:13.078 10:48:59 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.078 10:48:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.078 10:48:59 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.078 10:48:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:13.078 10:48:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:13.078 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:13.078 10:48:59 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:13.078 INFO: launching applications... 
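The "[: : integer expression expected" message above (seen once per test that sources nvmf/common.sh) comes from line 33 evaluating '[' '' -eq 1 ']': a flag variable expanded to the empty string on the left of an integer comparison, so '[' complains and merely returns non-zero, which is why the run continues. A minimal sketch of the usual guard; SOME_TEST_FLAG is a hypothetical stand-in, since the trace does not show which variable line 33 actually reads:

# SOME_TEST_FLAG is a placeholder for whichever flag nvmf/common.sh:33
# really tests; "${VAR:-0}" substitutes 0 when VAR is unset or empty,
# so '[' always receives an integer operand and stays quiet.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi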
00:05:13.078 10:48:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58592 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:13.078 Waiting for target to run... 00:05:13.078 10:48:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58592 /var/tmp/spdk_tgt.sock 00:05:13.078 10:48:59 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58592 ']' 00:05:13.079 10:48:59 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:13.079 10:48:59 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.079 10:48:59 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:13.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:13.079 10:48:59 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.079 10:48:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.079 10:48:59 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:13.079 [2024-11-15 10:48:59.883368] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:13.079 [2024-11-15 10:48:59.883496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58592 ] 00:05:13.647 [2024-11-15 10:49:00.286056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.647 [2024-11-15 10:49:00.392502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.585 00:05:14.585 INFO: shutting down applications... 00:05:14.585 10:49:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.585 10:49:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:14.585 10:49:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
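The startup just traced launches spdk_tgt with -r /var/tmp/spdk_tgt.sock, and waitforlisten then polls (local max_retries=100 in the trace) until the RPC socket answers before the test proceeds. A standalone sketch of that polling idea, not the autotest helper itself; it assumes only rpc.py and a target that eventually listens on the given socket:

# Poll until an SPDK target answers on its UNIX-domain RPC socket.
# spdk_get_version is a cheap RPC that fails until the listener is up.
wait_for_rpc_socket() {
    local sock=$1 retries=${2:-100} i
    for (( i = 0; i < retries; i++ )); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" \
                spdk_get_version >/dev/null 2>&1; then
            return 0    # target is listening
        fi
        sleep 0.1
    done
    return 1            # gave up after $retries attempts
}

wait_for_rpc_socket /var/tmp/spdk_tgt.sock 100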
00:05:14.585 10:49:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58592 ]] 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58592 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:14.585 10:49:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:14.845 10:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:14.845 10:49:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:14.845 10:49:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:14.845 10:49:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.415 10:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.415 10:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.415 10:49:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:15.415 10:49:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.985 10:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.985 10:49:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.985 10:49:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:15.985 10:49:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:16.555 10:49:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:16.555 10:49:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:16.555 10:49:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:16.555 10:49:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.124 10:49:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.124 10:49:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.124 10:49:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:17.124 10:49:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58592 00:05:17.384 SPDK target shutdown done 00:05:17.384 Success 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:17.384 10:49:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:17.384 10:49:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:17.384 00:05:17.384 real 0m4.637s 00:05:17.384 user 0m4.067s 00:05:17.384 sys 0m0.620s 00:05:17.384 
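The shutdown traced above (json_config/common.sh @38-@45) sends SIGINT once, then loops up to 30 times on kill -0, which probes whether the pid still exists without delivering any signal, sleeping 0.5 s between probes. The same pattern in isolation; the real helper also clears its app_pid bookkeeping and breaks out of the loop, which is omitted here:

# Graceful-stop pattern: SIGINT first, then poll for exit for ~15 s.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # process has exited
        sleep 0.5
    done
    echo "pid $pid still alive after 15 s" >&2
    return 1
}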
10:49:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.384 10:49:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:17.384 ************************************ 00:05:17.384 END TEST json_config_extra_key 00:05:17.384 ************************************ 00:05:17.644 10:49:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.644 10:49:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.644 10:49:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.644 10:49:04 -- common/autotest_common.sh@10 -- # set +x 00:05:17.644 ************************************ 00:05:17.644 START TEST alias_rpc 00:05:17.644 ************************************ 00:05:17.644 10:49:04 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:17.644 * Looking for test storage... 00:05:17.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:17.644 10:49:04 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.644 10:49:04 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.644 10:49:04 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.644 10:49:04 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.644 10:49:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:17.644 10:49:04 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.645 --rc genhtml_branch_coverage=1 00:05:17.645 --rc genhtml_function_coverage=1 00:05:17.645 --rc genhtml_legend=1 00:05:17.645 --rc geninfo_all_blocks=1 00:05:17.645 --rc geninfo_unexecuted_blocks=1 00:05:17.645 00:05:17.645 ' 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.645 --rc genhtml_branch_coverage=1 00:05:17.645 --rc genhtml_function_coverage=1 00:05:17.645 --rc genhtml_legend=1 00:05:17.645 --rc geninfo_all_blocks=1 00:05:17.645 --rc geninfo_unexecuted_blocks=1 00:05:17.645 00:05:17.645 ' 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.645 --rc genhtml_branch_coverage=1 00:05:17.645 --rc genhtml_function_coverage=1 00:05:17.645 --rc genhtml_legend=1 00:05:17.645 --rc geninfo_all_blocks=1 00:05:17.645 --rc geninfo_unexecuted_blocks=1 00:05:17.645 00:05:17.645 ' 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.645 --rc genhtml_branch_coverage=1 00:05:17.645 --rc genhtml_function_coverage=1 00:05:17.645 --rc genhtml_legend=1 00:05:17.645 --rc geninfo_all_blocks=1 00:05:17.645 --rc geninfo_unexecuted_blocks=1 00:05:17.645 00:05:17.645 ' 00:05:17.645 10:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:17.645 10:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:17.645 10:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58704 00:05:17.645 10:49:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58704 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58704 ']' 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:17.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.645 10:49:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.905 [2024-11-15 10:49:04.591482] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:05:17.905 [2024-11-15 10:49:04.591795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58704 ] 00:05:18.164 [2024-11-15 10:49:04.770686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.164 [2024-11-15 10:49:04.886508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.102 10:49:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.102 10:49:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:19.102 10:49:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:19.361 10:49:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58704 00:05:19.361 10:49:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58704 ']' 00:05:19.361 10:49:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58704 00:05:19.361 10:49:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:19.361 10:49:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.361 10:49:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58704 00:05:19.361 killing process with pid 58704 00:05:19.361 10:49:06 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:19.361 10:49:06 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:19.361 10:49:06 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58704' 00:05:19.361 10:49:06 alias_rpc -- common/autotest_common.sh@973 -- # kill 58704 00:05:19.361 10:49:06 alias_rpc -- common/autotest_common.sh@978 -- # wait 58704 00:05:21.902 ************************************ 00:05:21.902 END TEST alias_rpc 00:05:21.902 ************************************ 00:05:21.902 00:05:21.902 real 0m4.186s 00:05:21.902 user 0m4.182s 00:05:21.902 sys 0m0.608s 00:05:21.902 10:49:08 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.902 10:49:08 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.902 10:49:08 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:21.902 10:49:08 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.902 10:49:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.902 10:49:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.902 10:49:08 -- common/autotest_common.sh@10 -- # set +x 00:05:21.902 ************************************ 00:05:21.902 START TEST spdkcli_tcp 00:05:21.902 ************************************ 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:21.902 * Looking for test storage... 
00:05:21.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.902 10:49:08 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.902 --rc genhtml_branch_coverage=1 00:05:21.902 --rc genhtml_function_coverage=1 00:05:21.902 --rc genhtml_legend=1 00:05:21.902 --rc geninfo_all_blocks=1 00:05:21.902 --rc geninfo_unexecuted_blocks=1 00:05:21.902 00:05:21.902 ' 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.902 --rc genhtml_branch_coverage=1 00:05:21.902 --rc genhtml_function_coverage=1 00:05:21.902 --rc genhtml_legend=1 00:05:21.902 --rc geninfo_all_blocks=1 00:05:21.902 --rc geninfo_unexecuted_blocks=1 00:05:21.902 
00:05:21.902 ' 00:05:21.902 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:21.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.902 --rc genhtml_branch_coverage=1 00:05:21.902 --rc genhtml_function_coverage=1 00:05:21.902 --rc genhtml_legend=1 00:05:21.903 --rc geninfo_all_blocks=1 00:05:21.903 --rc geninfo_unexecuted_blocks=1 00:05:21.903 00:05:21.903 ' 00:05:21.903 10:49:08 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.903 --rc genhtml_branch_coverage=1 00:05:21.903 --rc genhtml_function_coverage=1 00:05:21.903 --rc genhtml_legend=1 00:05:21.903 --rc geninfo_all_blocks=1 00:05:21.903 --rc geninfo_unexecuted_blocks=1 00:05:21.903 00:05:21.903 ' 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:21.903 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:21.903 10:49:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.903 10:49:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.183 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58811 00:05:22.183 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:22.183 10:49:08 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58811 00:05:22.183 10:49:08 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58811 ']' 00:05:22.183 10:49:08 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.183 10:49:08 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.183 10:49:08 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.183 10:49:08 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.183 10:49:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.183 [2024-11-15 10:49:08.870392] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
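spdk_tgt is started here with -m 0x3 -p 0: the hex cpumask selects cores 0 and 1, and -p 0 makes core 0 the main core, which is why two "Reactor started" lines follow. The spdkcli_tcp test then bridges the target's UNIX-domain RPC socket onto TCP, as traced below; the bridge in isolation, using the same addresses and rpc.py flags as the trace:

# socat forwards TCP 127.0.0.1:9998 to the target's RPC socket;
# rpc.py then speaks JSON-RPC over TCP with retries (-r) and a
# per-request timeout (-t), matching the flags shown in the trace.
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid"    # tear the bridge down afterwards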
00:05:22.183 [2024-11-15 10:49:08.870533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58811 ] 00:05:22.443 [2024-11-15 10:49:09.054189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.443 [2024-11-15 10:49:09.171533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.443 [2024-11-15 10:49:09.171590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.381 10:49:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.381 10:49:10 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:23.381 10:49:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:23.381 10:49:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58828 00:05:23.381 10:49:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:23.640 [ 00:05:23.640 "bdev_malloc_delete", 00:05:23.640 "bdev_malloc_create", 00:05:23.640 "bdev_null_resize", 00:05:23.640 "bdev_null_delete", 00:05:23.640 "bdev_null_create", 00:05:23.640 "bdev_nvme_cuse_unregister", 00:05:23.640 "bdev_nvme_cuse_register", 00:05:23.640 "bdev_opal_new_user", 00:05:23.640 "bdev_opal_set_lock_state", 00:05:23.640 "bdev_opal_delete", 00:05:23.640 "bdev_opal_get_info", 00:05:23.640 "bdev_opal_create", 00:05:23.641 "bdev_nvme_opal_revert", 00:05:23.641 "bdev_nvme_opal_init", 00:05:23.641 "bdev_nvme_send_cmd", 00:05:23.641 "bdev_nvme_set_keys", 00:05:23.641 "bdev_nvme_get_path_iostat", 00:05:23.641 "bdev_nvme_get_mdns_discovery_info", 00:05:23.641 "bdev_nvme_stop_mdns_discovery", 00:05:23.641 "bdev_nvme_start_mdns_discovery", 00:05:23.641 "bdev_nvme_set_multipath_policy", 00:05:23.641 "bdev_nvme_set_preferred_path", 00:05:23.641 "bdev_nvme_get_io_paths", 00:05:23.641 "bdev_nvme_remove_error_injection", 00:05:23.641 "bdev_nvme_add_error_injection", 00:05:23.641 "bdev_nvme_get_discovery_info", 00:05:23.641 "bdev_nvme_stop_discovery", 00:05:23.641 "bdev_nvme_start_discovery", 00:05:23.641 "bdev_nvme_get_controller_health_info", 00:05:23.641 "bdev_nvme_disable_controller", 00:05:23.641 "bdev_nvme_enable_controller", 00:05:23.641 "bdev_nvme_reset_controller", 00:05:23.641 "bdev_nvme_get_transport_statistics", 00:05:23.641 "bdev_nvme_apply_firmware", 00:05:23.641 "bdev_nvme_detach_controller", 00:05:23.641 "bdev_nvme_get_controllers", 00:05:23.641 "bdev_nvme_attach_controller", 00:05:23.641 "bdev_nvme_set_hotplug", 00:05:23.641 "bdev_nvme_set_options", 00:05:23.641 "bdev_passthru_delete", 00:05:23.641 "bdev_passthru_create", 00:05:23.641 "bdev_lvol_set_parent_bdev", 00:05:23.641 "bdev_lvol_set_parent", 00:05:23.641 "bdev_lvol_check_shallow_copy", 00:05:23.641 "bdev_lvol_start_shallow_copy", 00:05:23.641 "bdev_lvol_grow_lvstore", 00:05:23.641 "bdev_lvol_get_lvols", 00:05:23.641 "bdev_lvol_get_lvstores", 00:05:23.641 "bdev_lvol_delete", 00:05:23.641 "bdev_lvol_set_read_only", 00:05:23.641 "bdev_lvol_resize", 00:05:23.641 "bdev_lvol_decouple_parent", 00:05:23.641 "bdev_lvol_inflate", 00:05:23.641 "bdev_lvol_rename", 00:05:23.641 "bdev_lvol_clone_bdev", 00:05:23.641 "bdev_lvol_clone", 00:05:23.641 "bdev_lvol_snapshot", 00:05:23.641 "bdev_lvol_create", 00:05:23.641 "bdev_lvol_delete_lvstore", 00:05:23.641 "bdev_lvol_rename_lvstore", 00:05:23.641 
"bdev_lvol_create_lvstore", 00:05:23.641 "bdev_raid_set_options", 00:05:23.641 "bdev_raid_remove_base_bdev", 00:05:23.641 "bdev_raid_add_base_bdev", 00:05:23.641 "bdev_raid_delete", 00:05:23.641 "bdev_raid_create", 00:05:23.641 "bdev_raid_get_bdevs", 00:05:23.641 "bdev_error_inject_error", 00:05:23.641 "bdev_error_delete", 00:05:23.641 "bdev_error_create", 00:05:23.641 "bdev_split_delete", 00:05:23.641 "bdev_split_create", 00:05:23.641 "bdev_delay_delete", 00:05:23.641 "bdev_delay_create", 00:05:23.641 "bdev_delay_update_latency", 00:05:23.641 "bdev_zone_block_delete", 00:05:23.641 "bdev_zone_block_create", 00:05:23.641 "blobfs_create", 00:05:23.641 "blobfs_detect", 00:05:23.641 "blobfs_set_cache_size", 00:05:23.641 "bdev_xnvme_delete", 00:05:23.641 "bdev_xnvme_create", 00:05:23.641 "bdev_aio_delete", 00:05:23.641 "bdev_aio_rescan", 00:05:23.641 "bdev_aio_create", 00:05:23.641 "bdev_ftl_set_property", 00:05:23.641 "bdev_ftl_get_properties", 00:05:23.641 "bdev_ftl_get_stats", 00:05:23.641 "bdev_ftl_unmap", 00:05:23.641 "bdev_ftl_unload", 00:05:23.641 "bdev_ftl_delete", 00:05:23.641 "bdev_ftl_load", 00:05:23.641 "bdev_ftl_create", 00:05:23.641 "bdev_virtio_attach_controller", 00:05:23.641 "bdev_virtio_scsi_get_devices", 00:05:23.641 "bdev_virtio_detach_controller", 00:05:23.641 "bdev_virtio_blk_set_hotplug", 00:05:23.641 "bdev_iscsi_delete", 00:05:23.641 "bdev_iscsi_create", 00:05:23.641 "bdev_iscsi_set_options", 00:05:23.641 "accel_error_inject_error", 00:05:23.641 "ioat_scan_accel_module", 00:05:23.641 "dsa_scan_accel_module", 00:05:23.641 "iaa_scan_accel_module", 00:05:23.641 "keyring_file_remove_key", 00:05:23.641 "keyring_file_add_key", 00:05:23.641 "keyring_linux_set_options", 00:05:23.641 "fsdev_aio_delete", 00:05:23.641 "fsdev_aio_create", 00:05:23.641 "iscsi_get_histogram", 00:05:23.641 "iscsi_enable_histogram", 00:05:23.641 "iscsi_set_options", 00:05:23.641 "iscsi_get_auth_groups", 00:05:23.641 "iscsi_auth_group_remove_secret", 00:05:23.641 "iscsi_auth_group_add_secret", 00:05:23.641 "iscsi_delete_auth_group", 00:05:23.641 "iscsi_create_auth_group", 00:05:23.641 "iscsi_set_discovery_auth", 00:05:23.641 "iscsi_get_options", 00:05:23.641 "iscsi_target_node_request_logout", 00:05:23.641 "iscsi_target_node_set_redirect", 00:05:23.641 "iscsi_target_node_set_auth", 00:05:23.641 "iscsi_target_node_add_lun", 00:05:23.641 "iscsi_get_stats", 00:05:23.641 "iscsi_get_connections", 00:05:23.641 "iscsi_portal_group_set_auth", 00:05:23.641 "iscsi_start_portal_group", 00:05:23.641 "iscsi_delete_portal_group", 00:05:23.641 "iscsi_create_portal_group", 00:05:23.641 "iscsi_get_portal_groups", 00:05:23.641 "iscsi_delete_target_node", 00:05:23.641 "iscsi_target_node_remove_pg_ig_maps", 00:05:23.641 "iscsi_target_node_add_pg_ig_maps", 00:05:23.641 "iscsi_create_target_node", 00:05:23.641 "iscsi_get_target_nodes", 00:05:23.641 "iscsi_delete_initiator_group", 00:05:23.641 "iscsi_initiator_group_remove_initiators", 00:05:23.641 "iscsi_initiator_group_add_initiators", 00:05:23.641 "iscsi_create_initiator_group", 00:05:23.641 "iscsi_get_initiator_groups", 00:05:23.641 "nvmf_set_crdt", 00:05:23.641 "nvmf_set_config", 00:05:23.641 "nvmf_set_max_subsystems", 00:05:23.641 "nvmf_stop_mdns_prr", 00:05:23.641 "nvmf_publish_mdns_prr", 00:05:23.641 "nvmf_subsystem_get_listeners", 00:05:23.641 "nvmf_subsystem_get_qpairs", 00:05:23.641 "nvmf_subsystem_get_controllers", 00:05:23.641 "nvmf_get_stats", 00:05:23.641 "nvmf_get_transports", 00:05:23.641 "nvmf_create_transport", 00:05:23.641 "nvmf_get_targets", 00:05:23.641 
"nvmf_delete_target", 00:05:23.641 "nvmf_create_target", 00:05:23.641 "nvmf_subsystem_allow_any_host", 00:05:23.641 "nvmf_subsystem_set_keys", 00:05:23.641 "nvmf_subsystem_remove_host", 00:05:23.641 "nvmf_subsystem_add_host", 00:05:23.641 "nvmf_ns_remove_host", 00:05:23.641 "nvmf_ns_add_host", 00:05:23.641 "nvmf_subsystem_remove_ns", 00:05:23.641 "nvmf_subsystem_set_ns_ana_group", 00:05:23.641 "nvmf_subsystem_add_ns", 00:05:23.641 "nvmf_subsystem_listener_set_ana_state", 00:05:23.641 "nvmf_discovery_get_referrals", 00:05:23.641 "nvmf_discovery_remove_referral", 00:05:23.641 "nvmf_discovery_add_referral", 00:05:23.641 "nvmf_subsystem_remove_listener", 00:05:23.641 "nvmf_subsystem_add_listener", 00:05:23.641 "nvmf_delete_subsystem", 00:05:23.641 "nvmf_create_subsystem", 00:05:23.641 "nvmf_get_subsystems", 00:05:23.641 "env_dpdk_get_mem_stats", 00:05:23.641 "nbd_get_disks", 00:05:23.641 "nbd_stop_disk", 00:05:23.641 "nbd_start_disk", 00:05:23.641 "ublk_recover_disk", 00:05:23.641 "ublk_get_disks", 00:05:23.641 "ublk_stop_disk", 00:05:23.641 "ublk_start_disk", 00:05:23.641 "ublk_destroy_target", 00:05:23.641 "ublk_create_target", 00:05:23.641 "virtio_blk_create_transport", 00:05:23.641 "virtio_blk_get_transports", 00:05:23.641 "vhost_controller_set_coalescing", 00:05:23.641 "vhost_get_controllers", 00:05:23.641 "vhost_delete_controller", 00:05:23.641 "vhost_create_blk_controller", 00:05:23.641 "vhost_scsi_controller_remove_target", 00:05:23.641 "vhost_scsi_controller_add_target", 00:05:23.641 "vhost_start_scsi_controller", 00:05:23.641 "vhost_create_scsi_controller", 00:05:23.641 "thread_set_cpumask", 00:05:23.641 "scheduler_set_options", 00:05:23.641 "framework_get_governor", 00:05:23.641 "framework_get_scheduler", 00:05:23.641 "framework_set_scheduler", 00:05:23.641 "framework_get_reactors", 00:05:23.641 "thread_get_io_channels", 00:05:23.641 "thread_get_pollers", 00:05:23.641 "thread_get_stats", 00:05:23.641 "framework_monitor_context_switch", 00:05:23.641 "spdk_kill_instance", 00:05:23.641 "log_enable_timestamps", 00:05:23.641 "log_get_flags", 00:05:23.641 "log_clear_flag", 00:05:23.641 "log_set_flag", 00:05:23.641 "log_get_level", 00:05:23.641 "log_set_level", 00:05:23.641 "log_get_print_level", 00:05:23.641 "log_set_print_level", 00:05:23.641 "framework_enable_cpumask_locks", 00:05:23.641 "framework_disable_cpumask_locks", 00:05:23.641 "framework_wait_init", 00:05:23.641 "framework_start_init", 00:05:23.641 "scsi_get_devices", 00:05:23.641 "bdev_get_histogram", 00:05:23.641 "bdev_enable_histogram", 00:05:23.641 "bdev_set_qos_limit", 00:05:23.641 "bdev_set_qd_sampling_period", 00:05:23.641 "bdev_get_bdevs", 00:05:23.641 "bdev_reset_iostat", 00:05:23.641 "bdev_get_iostat", 00:05:23.641 "bdev_examine", 00:05:23.641 "bdev_wait_for_examine", 00:05:23.641 "bdev_set_options", 00:05:23.641 "accel_get_stats", 00:05:23.641 "accel_set_options", 00:05:23.641 "accel_set_driver", 00:05:23.641 "accel_crypto_key_destroy", 00:05:23.641 "accel_crypto_keys_get", 00:05:23.641 "accel_crypto_key_create", 00:05:23.641 "accel_assign_opc", 00:05:23.641 "accel_get_module_info", 00:05:23.641 "accel_get_opc_assignments", 00:05:23.641 "vmd_rescan", 00:05:23.641 "vmd_remove_device", 00:05:23.641 "vmd_enable", 00:05:23.641 "sock_get_default_impl", 00:05:23.641 "sock_set_default_impl", 00:05:23.641 "sock_impl_set_options", 00:05:23.641 "sock_impl_get_options", 00:05:23.641 "iobuf_get_stats", 00:05:23.641 "iobuf_set_options", 00:05:23.641 "keyring_get_keys", 00:05:23.641 "framework_get_pci_devices", 00:05:23.641 
"framework_get_config", 00:05:23.641 "framework_get_subsystems", 00:05:23.641 "fsdev_set_opts", 00:05:23.641 "fsdev_get_opts", 00:05:23.641 "trace_get_info", 00:05:23.641 "trace_get_tpoint_group_mask", 00:05:23.641 "trace_disable_tpoint_group", 00:05:23.642 "trace_enable_tpoint_group", 00:05:23.642 "trace_clear_tpoint_mask", 00:05:23.642 "trace_set_tpoint_mask", 00:05:23.642 "notify_get_notifications", 00:05:23.642 "notify_get_types", 00:05:23.642 "spdk_get_version", 00:05:23.642 "rpc_get_methods" 00:05:23.642 ] 00:05:23.642 10:49:10 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.642 10:49:10 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:23.642 10:49:10 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58811 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58811 ']' 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58811 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58811 00:05:23.642 killing process with pid 58811 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58811' 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58811 00:05:23.642 10:49:10 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58811 00:05:26.176 ************************************ 00:05:26.176 END TEST spdkcli_tcp 00:05:26.176 ************************************ 00:05:26.176 00:05:26.176 real 0m4.315s 00:05:26.176 user 0m7.690s 00:05:26.176 sys 0m0.690s 00:05:26.176 10:49:12 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.176 10:49:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.176 10:49:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.176 10:49:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.176 10:49:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.176 10:49:12 -- common/autotest_common.sh@10 -- # set +x 00:05:26.176 ************************************ 00:05:26.176 START TEST dpdk_mem_utility 00:05:26.176 ************************************ 00:05:26.176 10:49:12 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.176 * Looking for test storage... 
00:05:26.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:26.176 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.176 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.176 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.436 10:49:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc 
genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.436 --rc genhtml_branch_coverage=1 00:05:26.436 --rc genhtml_function_coverage=1 00:05:26.436 --rc genhtml_legend=1 00:05:26.436 --rc geninfo_all_blocks=1 00:05:26.436 --rc geninfo_unexecuted_blocks=1 00:05:26.436 00:05:26.436 ' 00:05:26.436 10:49:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:26.436 10:49:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:26.436 10:49:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58933 00:05:26.436 10:49:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58933 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58933 ']' 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.436 10:49:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.436 [2024-11-15 10:49:13.211576] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
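Every test in this section opens with the same lcov gate: scripts/common.sh runs lt 1.15 2 through cmp_versions, splitting both version strings on '.', '-' and ':' and comparing them component by component, then exports the legacy --rc lcov options when lcov is older than 2. A self-contained sketch of that comparison, assuming purely numeric components (the traced cmp_versions first normalizes each field through its decimal helper):

# lt A B: exit 0 when version A sorts strictly before version B.
lt() {
    local -a ver1 ver2
    local IFS='.-:' v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not less-than
}

lt 1.15 2 && echo "lcov < 2: use the --rc lcov_* style options"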
00:05:26.436 [2024-11-15 10:49:13.211891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58933 ] 00:05:26.695 [2024-11-15 10:49:13.392347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.695 [2024-11-15 10:49:13.502962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.634 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.634 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:27.634 10:49:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.634 10:49:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.634 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.634 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.634 { 00:05:27.634 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.634 } 00:05:27.634 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.634 10:49:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:27.634 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:27.634 1 heaps totaling size 816.000000 MiB 00:05:27.634 size: 816.000000 MiB heap id: 0 00:05:27.634 end heaps---------- 00:05:27.634 9 mempools totaling size 595.772034 MiB 00:05:27.634 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.634 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.634 size: 92.545471 MiB name: bdev_io_58933 00:05:27.634 size: 50.003479 MiB name: msgpool_58933 00:05:27.634 size: 36.509338 MiB name: fsdev_io_58933 00:05:27.634 size: 21.763794 MiB name: PDU_Pool 00:05:27.634 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.634 size: 4.133484 MiB name: evtpool_58933 00:05:27.634 size: 0.026123 MiB name: Session_Pool 00:05:27.634 end mempools------- 00:05:27.634 6 memzones totaling size 4.142822 MiB 00:05:27.634 size: 1.000366 MiB name: RG_ring_0_58933 00:05:27.634 size: 1.000366 MiB name: RG_ring_1_58933 00:05:27.634 size: 1.000366 MiB name: RG_ring_4_58933 00:05:27.634 size: 1.000366 MiB name: RG_ring_5_58933 00:05:27.634 size: 0.125366 MiB name: RG_ring_2_58933 00:05:27.634 size: 0.015991 MiB name: RG_ring_3_58933 00:05:27.634 end memzones------- 00:05:27.634 10:49:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.634 heap id: 0 total size: 816.000000 MiB number of busy elements: 312 number of free elements: 18 00:05:27.634 list of free elements. 
size: 16.792114 MiB
00:05:27.634 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:27.634 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:27.634 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:27.634 element at address: 0x200018d00040 with size: 0.999939 MiB
00:05:27.634 element at address: 0x200019100040 with size: 0.999939 MiB
00:05:27.634 element at address: 0x200019200000 with size: 0.999084 MiB
00:05:27.634 element at address: 0x200031e00000 with size: 0.994324 MiB
00:05:27.634 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:27.634 element at address: 0x200018a00000 with size: 0.959656 MiB
00:05:27.634 element at address: 0x200019500040 with size: 0.936401 MiB
00:05:27.634 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:27.634 element at address: 0x20001ac00000 with size: 0.562683 MiB
00:05:27.634 element at address: 0x200000c00000 with size: 0.490173 MiB
00:05:27.634 element at address: 0x200018e00000 with size: 0.487976 MiB
00:05:27.634 element at address: 0x200019600000 with size: 0.485413 MiB
00:05:27.634 element at address: 0x200012c00000 with size: 0.443237 MiB
00:05:27.634 element at address: 0x200028000000 with size: 0.390442 MiB
00:05:27.634 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:27.634 list of standard malloc elements. size: 199.286987 MiB
00:05:27.634 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:27.634 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:27.634 element at address: 0x200018bfff80 with size: 1.000183 MiB
00:05:27.634 element at address: 0x200018ffff80 with size: 1.000183 MiB
00:05:27.634 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:27.634 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:27.634 element at address: 0x2000195eff40 with size: 0.062683 MiB
00:05:27.634 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:27.634 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:27.634 element at address: 0x2000195efdc0 with size: 0.000366 MiB
00:05:27.634 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further elements of 0.000244 MiB each, at addresses 0x2000002d7b00 through 0x20002806fe80, condensed]
00:05:27.900 list of memzone associated elements.
size: 599.920898 MiB
00:05:27.900 element at address: 0x20001ac954c0 with size: 211.416809 MiB
00:05:27.900 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:27.900 element at address: 0x20002806ff80 with size: 157.562622 MiB
00:05:27.900 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:27.900 element at address: 0x200012df4740 with size: 92.045105 MiB
00:05:27.900 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58933_0
00:05:27.900 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:27.900 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58933_0
00:05:27.900 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:27.900 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58933_0
00:05:27.900 element at address: 0x2000197be900 with size: 20.255615 MiB
00:05:27.900 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:27.900 element at address: 0x200031ffeb00 with size: 18.005127 MiB
00:05:27.900 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
[26 further memzone entries from 3.000305 MiB down to 0.000366 MiB (evtpool/msgpool/fsdev_io/bdev_io/ring/PDU/SCSI/Session pools and rings for pid 58933), condensed]
00:05:27.900 10:49:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:27.900 10:49:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58933
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58933 ']'
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58933
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58933
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:27.900 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:27.901 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58933'
00:05:27.901 killing process with pid 58933
00:05:27.901 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58933
00:05:27.901 10:49:14 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58933
00:05:30.487 ************************************
00:05:30.487 END TEST dpdk_mem_utility
00:05:30.487 ************************************
00:05:30.487
00:05:30.487 real 0m4.041s
00:05:30.487 user 0m3.901s
00:05:30.487 sys 0m0.600s
00:05:30.487 10:49:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:30.487 10:49:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
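Every element record in the dump above has the fixed shape "element at address: 0x... with size: N MiB", so the report is easy to post-process. A minimal awk sketch, not part of the test suite, that buckets elements by size and totals them; it assumes the console output was saved to a file (mem_dump.txt is a hypothetical name):

  # Summarize a saved DPDK memory dump: count elements per size, plus a grand total.
  awk '/element at address:/ {
         size = $(NF-1)                 # the field just before the trailing "MiB"
         count[size]++
         total += size
       }
       END {
         for (s in count) printf "%12s MiB x %d\n", s, count[s]
         printf "total: %.6f MiB across all listed elements\n", total
       }' mem_dump.txt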
00:05:30.487 10:49:16 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:30.487 10:49:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:30.487 10:49:16 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.487 10:49:16 -- common/autotest_common.sh@10 -- # set +x
00:05:30.488 ************************************
00:05:30.488 START TEST event
00:05:30.488 ************************************
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:30.488 * Looking for test storage...
00:05:30.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1693 -- # lcov --version
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:30.488 10:49:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[~30 xtrace records condensed: cmp_versions splits both version strings on IFS=.-:, walks the fields numerically (1 < 2) and returns 0, after which LCOV_OPTS and LCOV are exported with the lcov/genhtml/geninfo branch- and function-coverage flags]
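The condensed trace above is scripts/common.sh deciding whether the installed lcov predates version 2 before choosing coverage flags. A standalone bash sketch of the same dotted-version comparison, reconstructed from the traced steps rather than copied from scripts/common.sh:

  #!/usr/bin/env bash
  # Return 0 (true) when dotted version $1 is strictly lower than $2.
  version_lt() {
      local -a v1 v2
      local i f1 f2
      IFS=.- read -ra v1 <<< "$1"      # the trace splits on IFS=.-: ; dots suffice here
      IFS=.- read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          f1=${v1[i]:-0}; f2=${v2[i]:-0}   # a missing field compares as 0
          (( f1 < f2 )) && return 0
          (( f1 > f2 )) && return 1
      done
      return 1                             # versions are equal
  }

  version_lt 1.15 2 && echo "lcov < 2: use the old-style flags"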
00:05:30.488 10:49:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:30.488 10:49:17 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:30.488 10:49:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:30.488 10:49:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:30.488 10:49:17 event -- common/autotest_common.sh@10 -- # set +x
00:05:30.488 ************************************
00:05:30.488 START TEST event_perf
00:05:30.488 ************************************
00:05:30.488 10:49:17 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
Running I/O for 1 seconds...[2024-11-15 10:49:17.259587] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
[2024-11-15 10:49:17.259796] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59041 ]
[2024-11-15 10:49:17.443336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-15 10:49:17.559698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-15 10:49:17.560020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-15 10:49:17.559833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
Running I/O for 1 seconds...[2024-11-15 10:49:17.559986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:32.125
00:05:32.125 lcore 0: 210116
00:05:32.125 lcore 1: 210114
00:05:32.125 lcore 2: 210113
00:05:32.125 lcore 3: 210113
00:05:32.125 done.
00:05:32.125
00:05:32.125 real 0m1.600s
00:05:32.125 user 0m4.350s
00:05:32.125 sys 0m0.129s
00:05:32.125 10:49:18 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.125 10:49:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:32.125 ************************************
00:05:32.125 END TEST event_perf
00:05:32.125 ************************************
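event_perf ran with -m 0xF, a hex bitmask selecting cores 0 through 3, which is why four reactors start and four lcore counters are reported. A tiny helper of the sort you might use to build such masks (illustrative only, not part of the SPDK tree):

  # Build a DPDK/SPDK-style hex core mask from a list of core ids.
  core_mask() {
      local mask=0 core
      for core in "$@"; do
          (( mask |= 1 << core ))
      done
      printf '0x%X\n' "$mask"
  }

  core_mask 0 1 2 3   # -> 0xF, the mask event_perf used above
  core_mask 0 1       # -> 0x3, the mask app_repeat uses later in this log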
00:05:32.125 10:49:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:32.125 10:49:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:32.125 10:49:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.125 10:49:18 event -- common/autotest_common.sh@10 -- # set +x
00:05:32.125 ************************************
00:05:32.125 START TEST event_reactor
00:05:32.125 ************************************
00:05:32.125 10:49:18 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
[2024-11-15 10:49:18.923967] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
[2024-11-15 10:49:18.924207] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59080 ]
[2024-11-15 10:49:19.107326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-15 10:49:19.227860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.762 test_start
00:05:33.762 oneshot
00:05:33.762 tick 100
00:05:33.762 tick 100
00:05:33.762 tick 250
00:05:33.763 tick 100
00:05:33.763 tick 100
00:05:33.763 tick 250
00:05:33.763 tick 100
00:05:33.763 tick 500
00:05:33.763 tick 100
00:05:33.763 tick 100
00:05:33.763 tick 250
00:05:33.763 tick 100
00:05:33.763 tick 100
00:05:33.763 test_end
00:05:33.763
00:05:33.763 real 0m1.583s
00:05:33.763 user 0m1.370s
00:05:33.763 sys 0m0.103s
00:05:33.763 10:49:20 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.763 10:49:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:33.763 ************************************
00:05:33.763 END TEST event_reactor
00:05:33.763 ************************************
00:05:33.763 10:49:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:33.763 10:49:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:33.763 10:49:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.763 10:49:20 event -- common/autotest_common.sh@10 -- # set +x
00:05:33.763 ************************************
00:05:33.763 START TEST event_reactor_perf
00:05:33.763 ************************************
00:05:33.763 10:49:20 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
[2024-11-15 10:49:20.574848] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
[2024-11-15 10:49:20.575132] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59117 ]
[2024-11-15 10:49:20.753702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-15 10:49:20.868627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.401 test_start
00:05:35.401 test_end
00:05:35.401 Performance: 377263 events per second
00:05:35.401
00:05:35.401 real 0m1.571s
00:05:35.401 user 0m1.360s
00:05:35.401 sys 0m0.102s
00:05:35.401 10:49:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.401 ************************************
00:05:35.401 END TEST event_reactor_perf
00:05:35.401 ************************************
00:05:35.401 10:49:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
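reactor_perf boils down to one headline figure ("Performance: 377263 events per second"). If you wanted to track that number across runs, a sketch like the following would pull it out of a saved console log and flag regressions; autotest.log and the 300000 floor are placeholder choices, not values from this job:

  # Extract the latest reactor_perf headline from a saved log and
  # complain when it falls below a chosen floor.
  floor=300000
  events=$(grep -o 'Performance: [0-9]* events per second' autotest.log \
             | awk '{print $2}' | tail -n 1)
  if [ -n "$events" ] && [ "$events" -lt "$floor" ]; then
      echo "reactor_perf regression: $events < $floor events/sec" >&2
  fi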
00:05:35.401 10:49:22 event -- event/event.sh@49 -- # uname -s
00:05:35.401 10:49:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:35.401 10:49:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:35.401 10:49:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.401 10:49:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.401 10:49:22 event -- common/autotest_common.sh@10 -- # set +x
00:05:35.401 ************************************
00:05:35.401 START TEST event_scheduler
00:05:35.401 ************************************
00:05:35.661 10:49:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:05:35.661 * Looking for test storage...
00:05:35.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
[the same lcov version check and LCOV_OPTS/LCOV export boilerplate traced at the start of TEST event repeats here for event_scheduler; ~45 xtrace records condensed]
00:05:35.661 10:49:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:35.661 10:49:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59193
00:05:35.661 10:49:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:35.661 10:49:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:35.661 10:49:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59193
00:05:35.662 10:49:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59193 ']'
00:05:35.662 10:49:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.662 10:49:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:35.662 10:49:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.662 10:49:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:35.662 10:49:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 10:49:22.483163] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
[2024-11-15 10:49:22.483505] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59193 ]
[2024-11-15 10:49:22.662996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-11-15 10:49:22.787143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-15 10:49:22.787481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-15 10:49:22.787274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-15 10:49:22.787439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:36.747 10:49:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:36.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:36.747 POWER: Cannot set governor of lcore 0 to userspace
00:05:36.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:36.747 POWER: Cannot set governor of lcore 0 to performance
00:05:36.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:36.747 POWER: Cannot set governor of lcore 0 to userspace
00:05:36.747 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:36.747 POWER: Cannot set governor of lcore 0 to userspace
00:05:36.747 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:05:36.747 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:05:36.747 POWER: Unable to set Power Management Environment for lcore 0
[2024-11-15 10:49:23.329271] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
[2024-11-15 10:49:23.329324] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
[2024-11-15 10:49:23.329358] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
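The POWER and GUEST_CHANNEL errors above are DPDK's power library discovering that this VM exposes no cpufreq interface (every open of /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor fails), after which the dynamic scheduler proceeds without a governor. A quick preflight of the kind you could run on a host to check governor control ahead of time (illustrative sketch, not part of the test suite):

  # Report each CPU's current cpufreq governor, or note that the
  # interface is absent, as it is on the VM in this log.
  for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
      [ -r "$gov" ] || { echo "no cpufreq support exposed"; break; }
      printf '%s: %s\n' "$gov" "$(cat "$gov")"
  done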
[2024-11-15 10:49:23.329440] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-11-15 10:49:23.329475] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-11-15 10:49:23.329567] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.747 10:49:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:36.747 10:49:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 10:49:23.671394] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:37.007 10:49:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.007 10:49:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:37.007 10:49:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.007 10:49:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.007 10:49:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:37.007 ************************************
00:05:37.007 START TEST scheduler_create_thread
00:05:37.007 ************************************
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:37.007 2
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:37.007 3
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:37.007 4
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:37.007 5
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:37.007 6
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:37.007 7
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:37.007 8
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:37.007 9
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:37.007 10
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:37.007 10:49:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:37.945 10:49:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:37.945 10:49:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
[each rpc_cmd above was bracketed by common/autotest_common.sh@563 xtrace_disable, @10 set +x and @591 [[ 0 == 0 ]] records; that repeated harness noise is condensed]
00:05:39.322 ************************************
00:05:39.322 END TEST scheduler_create_thread
00:05:39.322 ************************************
00:05:39.322
00:05:39.322 real 0m2.138s
00:05:39.322 user 0m0.026s
00:05:39.322 sys 0m0.007s
00:05:39.322 10:49:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.322 10:49:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
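All of scheduler_create_thread is driven through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py client, loaded here with the scheduler test plugin. A hand-run equivalent of the traced calls might look like this (a sketch; it assumes the scheduler test app is still listening on the default RPC socket and that rpc.py lives at the path used elsewhere in this log):

  # Create an always-active thread pinned to core 0, halve its activity,
  # then delete it, mirroring the RPCs traced above.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  tid=$($rpc --plugin scheduler_plugin scheduler_thread_create \
             -n active_pinned -m 0x1 -a 100)      # prints the new thread id
  $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"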
00:05:39.322 10:49:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:39.322 10:49:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59193
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59193 ']'
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59193
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59193
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:05:39.322 killing process with pid 59193
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59193'
00:05:39.322 10:49:25 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59193
00:05:39.581 10:49:25 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59193
[2024-11-15 10:49:26.304713] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:40.960 ************************************
00:05:40.960 END TEST event_scheduler
00:05:40.960 ************************************
00:05:40.960
00:05:40.960 real 0m5.292s
00:05:40.960 user 0m8.715s
00:05:40.960 sys 0m0.553s
00:05:40.960 10:49:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.960 10:49:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:40.960 10:49:27 event -- event/event.sh@51 -- # modprobe -n nbd
00:05:40.960 10:49:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:40.960 10:49:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:40.960 10:49:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.960 10:49:27 event -- common/autotest_common.sh@10 -- # set +x
00:05:40.960 ************************************
00:05:40.960 START TEST app_repeat
00:05:40.960 ************************************
00:05:40.960 10:49:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:05:40.960 Process app_repeat pid: 59299
00:05:40.960 spdk_app_start Round 0
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59299
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59299'
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:40.960 10:49:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:40.961 10:49:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59299 /var/tmp/spdk-nbd.sock
00:05:40.961 10:49:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59299 ']'
00:05:40.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
[waitforlisten locals (@839 rpc_addr=/var/tmp/spdk-nbd.sock, @840 max_retries=100, @842 echo, @844 xtrace_disable, @10 set +x) condensed]
[2024-11-15 10:49:27.615709] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
[2024-11-15 10:49:27.615999] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59299 ]
[2024-11-15 10:49:27.798397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-11-15 10:49:27.915779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-15 10:49:27.915809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.789 10:49:28 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.789 10:49:28 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:41.789 10:49:28 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:42.048 Malloc0
00:05:42.048 10:49:28 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:42.307 Malloc1
00:05:42.307 10:49:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
[nbd_common.sh locals for nbd_rpc_data_verify and nbd_start_disks (@90-@92, @9-@12, loop setup @14) condensed]
00:05:42.307 10:49:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:42.307 10:49:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:42.566 /dev/nbd0
00:05:42.566 10:49:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:42.566 10:49:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:42.566 10:49:29 event.app_repeat --
common/autotest_common.sh@877 -- # break 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.566 1+0 records in 00:05:42.566 1+0 records out 00:05:42.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261946 s, 15.6 MB/s 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.566 10:49:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.566 10:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.566 10:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.566 10:49:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:42.826 /dev/nbd1 00:05:42.826 10:49:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:42.826 10:49:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:42.826 1+0 records in 00:05:42.826 1+0 records out 00:05:42.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038144 s, 10.7 MB/s 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:42.826 10:49:29 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:42.826 10:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:42.826 10:49:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:42.826 10:49:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:42.826 10:49:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:42.826 
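The waitfornbd trace just above (common/autotest_common.sh@872-893) shows how the test decides /dev/nbd0 is usable: first the name must appear in /proc/partitions, then a single 4 KiB O_DIRECT read must come back non-empty. A sketch under those assumptions — the back-off between retries is not visible in the trace, and the traced run succeeds on the first pass:

```bash
# Hedged reconstruction of waitfornbd from the trace above; the tmp-file
# path is shortened from the traced test/event/nbdtest location.
waitfornbd() {
    local nbd_name=$1 i
    local tmp_file=/tmp/nbdtest

    # Phase 1: up to 20 polls for the device name in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; not visible in the trace
    done

    # Phase 2: prove the device is readable -- one 4 KiB O_DIRECT read
    # must land a non-empty file.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of="$tmp_file" bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s "$tmp_file")
            rm -f "$tmp_file"
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1   # assumed
    done
    return 1
}
```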
10:49:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:43.094 { 00:05:43.094 "nbd_device": "/dev/nbd0", 00:05:43.094 "bdev_name": "Malloc0" 00:05:43.094 }, 00:05:43.094 { 00:05:43.094 "nbd_device": "/dev/nbd1", 00:05:43.094 "bdev_name": "Malloc1" 00:05:43.094 } 00:05:43.094 ]' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:43.094 { 00:05:43.094 "nbd_device": "/dev/nbd0", 00:05:43.094 "bdev_name": "Malloc0" 00:05:43.094 }, 00:05:43.094 { 00:05:43.094 "nbd_device": "/dev/nbd1", 00:05:43.094 "bdev_name": "Malloc1" 00:05:43.094 } 00:05:43.094 ]' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:43.094 /dev/nbd1' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:43.094 /dev/nbd1' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:43.094 256+0 records in 00:05:43.094 256+0 records out 00:05:43.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118672 s, 88.4 MB/s 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:43.094 256+0 records in 00:05:43.094 256+0 records out 00:05:43.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0295975 s, 35.4 MB/s 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:43.094 256+0 records in 00:05:43.094 256+0 records out 00:05:43.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0355682 s, 29.5 MB/s 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:43.094 10:49:29 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.094 10:49:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.095 10:49:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:43.358 10:49:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:43.359 10:49:30 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:43.618 10:49:30 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.618 10:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:43.877 10:49:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:43.877 10:49:30 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:44.445 10:49:31 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.384 [2024-11-15 10:49:32.173995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.644 [2024-11-15 10:49:32.287759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.644 [2024-11-15 10:49:32.287811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.644 [2024-11-15 10:49:32.481395] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.644 [2024-11-15 10:49:32.481487] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:47.550 10:49:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:47.550 spdk_app_start Round 1 00:05:47.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:47.550 10:49:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:47.550 10:49:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59299 /var/tmp/spdk-nbd.sock 00:05:47.550 10:49:34 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:05:47.550 10:49:34 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:47.550 10:49:34 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.551 10:49:34 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:47.551 10:49:34 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.551 10:49:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:47.551 10:49:34 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.551 10:49:34 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:47.551 10:49:34 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:47.810 Malloc0 00:05:47.810 10:49:34 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:48.126 Malloc1 00:05:48.126 10:49:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.126 10:49:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:48.394 /dev/nbd0 00:05:48.394 10:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:48.394 10:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.394 1+0 records in 00:05:48.394 1+0 records out 
00:05:48.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00103933 s, 3.9 MB/s 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.394 10:49:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.394 10:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.394 10:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.394 10:49:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:48.653 /dev/nbd1 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:48.653 1+0 records in 00:05:48.653 1+0 records out 00:05:48.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490344 s, 8.4 MB/s 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:48.653 10:49:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.653 10:49:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:48.913 { 00:05:48.913 "nbd_device": "/dev/nbd0", 00:05:48.913 "bdev_name": "Malloc0" 00:05:48.913 }, 00:05:48.913 { 00:05:48.913 "nbd_device": "/dev/nbd1", 00:05:48.913 "bdev_name": "Malloc1" 00:05:48.913 } 00:05:48.913 
]' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:48.913 { 00:05:48.913 "nbd_device": "/dev/nbd0", 00:05:48.913 "bdev_name": "Malloc0" 00:05:48.913 }, 00:05:48.913 { 00:05:48.913 "nbd_device": "/dev/nbd1", 00:05:48.913 "bdev_name": "Malloc1" 00:05:48.913 } 00:05:48.913 ]' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:48.913 /dev/nbd1' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:48.913 /dev/nbd1' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:48.913 256+0 records in 00:05:48.913 256+0 records out 00:05:48.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121428 s, 86.4 MB/s 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:48.913 256+0 records in 00:05:48.913 256+0 records out 00:05:48.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0264957 s, 39.6 MB/s 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:48.913 256+0 records in 00:05:48.913 256+0 records out 00:05:48.913 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.034409 s, 30.5 MB/s 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:48.913 10:49:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.173 10:49:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.432 10:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:49.692 10:49:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:49.692 10:49:36 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:49.952 10:49:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:51.327 [2024-11-15 10:49:37.935374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:51.327 [2024-11-15 10:49:38.046859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.327 [2024-11-15 10:49:38.046879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.585 [2024-11-15 10:49:38.242723] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:51.585 [2024-11-15 10:49:38.242818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:52.961 spdk_app_start Round 2 00:05:52.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:52.961 10:49:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:52.961 10:49:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:52.961 10:49:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59299 /var/tmp/spdk-nbd.sock 00:05:52.961 10:49:39 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:05:52.961 10:49:39 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:52.961 10:49:39 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.961 10:49:39 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
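nbd_get_count (bdev/nbd_common.sh@61-66) is traced twice per round above: once returning 2 with both devices attached, and once returning 0 after teardown when the RPC answers '[]'. A sketch of what those lines imply; note how the bare `true` in the empty-case trace matches the `|| true` guard around grep -c:

```bash
# Hedged reconstruction of nbd_get_count from the two traces above.
nbd_get_count() {
    local rpc_server=$1
    local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py  # path from the trace
    local nbd_disks_json nbd_disks_name count

    nbd_disks_json=$($rpc_py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c prints the match count but exits 1 when it is zero; the
    # '|| true' keeps an empty disk list (count=0) from aborting the
    # test -- that is the bare 'true' visible in the teardown trace.
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}
```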
00:05:52.961 10:49:39 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.961 10:49:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.220 10:49:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.220 10:49:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.220 10:49:40 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.479 Malloc0 00:05:53.479 10:49:40 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.738 Malloc1 00:05:53.738 10:49:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.738 10:49:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:53.997 /dev/nbd0 00:05:53.997 10:49:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:53.997 10:49:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:53.997 1+0 records in 00:05:53.997 1+0 records out 
00:05:53.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748631 s, 5.5 MB/s 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:53.997 10:49:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:53.997 10:49:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:53.997 10:49:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:53.997 10:49:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.256 /dev/nbd1 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.256 1+0 records in 00:05:54.256 1+0 records out 00:05:54.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274076 s, 14.9 MB/s 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.256 10:49:41 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.256 10:49:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.515 { 00:05:54.515 "nbd_device": "/dev/nbd0", 00:05:54.515 "bdev_name": "Malloc0" 00:05:54.515 }, 00:05:54.515 { 00:05:54.515 "nbd_device": "/dev/nbd1", 00:05:54.515 "bdev_name": "Malloc1" 00:05:54.515 } 
00:05:54.515 ]' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.515 { 00:05:54.515 "nbd_device": "/dev/nbd0", 00:05:54.515 "bdev_name": "Malloc0" 00:05:54.515 }, 00:05:54.515 { 00:05:54.515 "nbd_device": "/dev/nbd1", 00:05:54.515 "bdev_name": "Malloc1" 00:05:54.515 } 00:05:54.515 ]' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.515 /dev/nbd1' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.515 /dev/nbd1' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.515 10:49:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.774 256+0 records in 00:05:54.774 256+0 records out 00:05:54.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129673 s, 80.9 MB/s 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.774 256+0 records in 00:05:54.774 256+0 records out 00:05:54.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254186 s, 41.3 MB/s 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.774 256+0 records in 00:05:54.774 256+0 records out 00:05:54.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.033 s, 31.8 MB/s 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@82 -- 
# for i in "${nbd_list[@]}" 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.774 10:49:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.033 10:49:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.292 10:49:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.292 10:49:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.557 10:49:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.557 10:49:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:55.557 10:49:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.557 10:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.557 10:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.558 10:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.558 10:49:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.558 10:49:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.558 10:49:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.558 10:49:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.558 10:49:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.558 10:49:42 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.817 10:49:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.196 [2024-11-15 10:49:43.744604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.196 [2024-11-15 10:49:43.859258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.196 [2024-11-15 10:49:43.859259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.454 [2024-11-15 10:49:44.054340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.454 [2024-11-15 10:49:44.054433] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:58.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:58.832 10:49:45 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59299 /var/tmp/spdk-nbd.sock 00:05:58.832 10:49:45 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59299 ']' 00:05:58.832 10:49:45 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:58.832 10:49:45 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.832 10:49:45 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
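Teardown mirrors startup: after each nbd_stop_disk RPC, waitfornbd_exit (bdev/nbd_common.sh@35-45) polls /proc/partitions until the device name disappears. The traces above all hit the break on the first pass, so the retry details here are assumptions:

```bash
# Hedged sketch of waitfornbd_exit; the break condition (name no longer
# listed) and the back-off are inferred from the traced line numbers.
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1   # assumed back-off; not visible in the trace
        else
            break       # device gone -- detach completed
        fi
    done
    return 0
}
```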
00:05:58.832 10:49:45 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.832 10:49:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.091 10:49:45 event.app_repeat -- event/event.sh@39 -- # killprocess 59299 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59299 ']' 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59299 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59299 00:05:59.091 killing process with pid 59299 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59299' 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59299 00:05:59.091 10:49:45 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59299 00:06:00.065 spdk_app_start is called in Round 0. 00:06:00.065 Shutdown signal received, stop current app iteration 00:06:00.065 Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 reinitialization... 00:06:00.065 spdk_app_start is called in Round 1. 00:06:00.065 Shutdown signal received, stop current app iteration 00:06:00.065 Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 reinitialization... 00:06:00.065 spdk_app_start is called in Round 2. 00:06:00.065 Shutdown signal received, stop current app iteration 00:06:00.065 Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 reinitialization... 00:06:00.065 spdk_app_start is called in Round 3. 00:06:00.065 Shutdown signal received, stop current app iteration 00:06:00.324 10:49:46 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:00.324 10:49:46 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:00.324 00:06:00.324 real 0m19.354s 00:06:00.324 user 0m41.015s 00:06:00.324 sys 0m3.174s 00:06:00.324 10:49:46 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.324 ************************************ 00:06:00.324 END TEST app_repeat 00:06:00.324 ************************************ 00:06:00.324 10:49:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.324 10:49:46 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:00.324 10:49:46 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.324 10:49:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.324 10:49:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.324 10:49:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.324 ************************************ 00:06:00.324 START TEST cpu_locks 00:06:00.324 ************************************ 00:06:00.324 10:49:46 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:00.324 * Looking for test storage... 
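killprocess (common/autotest_common.sh@954-978) is traced in full above, first for the scheduler app (pid 59193) and again for app_repeat (pid 59299). A reconstruction of the visible branches; the sudo special case is checked but never taken in this log, so its body is only hinted at:

```bash
# Hedged sketch of killprocess from the traced lines.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1               # must still be running
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [ "$process_name" = sudo ]; then
            :   # would signal the child instead; branch never taken here
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true   # reap it; a SIGTERM exit status is expected
}
```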
00:06:00.324 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:00.324 10:49:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.324 10:49:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.324 10:49:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.584 10:49:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.584 10:49:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:00.584 10:49:47 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.584 10:49:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.584 --rc genhtml_branch_coverage=1 00:06:00.584 --rc genhtml_function_coverage=1 00:06:00.584 --rc genhtml_legend=1 00:06:00.584 --rc geninfo_all_blocks=1 00:06:00.584 --rc geninfo_unexecuted_blocks=1 00:06:00.584 00:06:00.584 ' 00:06:00.584 10:49:47 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.584 --rc genhtml_branch_coverage=1 00:06:00.584 --rc genhtml_function_coverage=1 
00:06:00.584 --rc genhtml_legend=1 00:06:00.584 --rc geninfo_all_blocks=1 00:06:00.584 --rc geninfo_unexecuted_blocks=1 00:06:00.584 00:06:00.584 ' 00:06:00.584 10:49:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.584 --rc genhtml_branch_coverage=1 00:06:00.584 --rc genhtml_function_coverage=1 00:06:00.584 --rc genhtml_legend=1 00:06:00.584 --rc geninfo_all_blocks=1 00:06:00.584 --rc geninfo_unexecuted_blocks=1 00:06:00.584 00:06:00.584 ' 00:06:00.584 10:49:47 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.585 --rc genhtml_branch_coverage=1 00:06:00.585 --rc genhtml_function_coverage=1 00:06:00.585 --rc genhtml_legend=1 00:06:00.585 --rc geninfo_all_blocks=1 00:06:00.585 --rc geninfo_unexecuted_blocks=1 00:06:00.585 00:06:00.585 ' 00:06:00.585 10:49:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:00.585 10:49:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:00.585 10:49:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:00.585 10:49:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:00.585 10:49:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.585 10:49:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.585 10:49:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.585 ************************************ 00:06:00.585 START TEST default_locks 00:06:00.585 ************************************ 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59748 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59748 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59748 ']' 00:06:00.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.585 10:49:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.585 [2024-11-15 10:49:47.336807] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
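The cmp_versions walk traced above is how the suite decides whether the installed lcov predates 2.x before picking its coverage flags: both version strings are split on '.', '-' and ':' and compared field by field. A condensed standalone sketch of that logic (the helper name version_lt is illustrative, purely numeric fields are assumed, and the real scripts/common.sh routes each field through its decimal helper first):

    # Compare two dotted version strings field by field; succeed when $1 < $2.
    version_lt() {
        local -a v1 v2
        local i n
        IFS=.-: read -ra v1 <<< "$1"      # e.g. "1.15" -> (1 15)
        IFS=.-: read -ra v2 <<< "$2"      # e.g. "2"    -> (2)
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # earlier field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                          # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov < 2: use the 1.x branch-coverage flags"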
00:06:00.585 [2024-11-15 10:49:47.337117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59748 ] 00:06:00.844 [2024-11-15 10:49:47.518691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.844 [2024-11-15 10:49:47.629775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.783 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.783 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:01.783 10:49:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59748 00:06:01.783 10:49:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59748 00:06:01.783 10:49:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59748 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59748 ']' 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59748 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59748 00:06:02.042 killing process with pid 59748 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59748' 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59748 00:06:02.042 10:49:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59748 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59748 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59748 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59748 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59748 ']' 00:06:04.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
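The locks_exist check traced just above verifies that the freshly started target really holds its per-core lock file by listing the POSIX locks owned by the process. The same spot check can be run by hand with util-linux lslocks; a sketch using the pid from this run:

    # Ask lslocks which files the target has locked and look for SPDK's
    # per-core lock files (/var/tmp/spdk_cpu_lock_NNN).
    pid=59748    # the spdk_tgt pid from the run above
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "core-mask lock held by $pid"
    fi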
00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.580 ERROR: process (pid: 59748) is no longer running 00:06:04.580 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59748) - No such process 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:04.580 00:06:04.580 real 0m4.043s 00:06:04.580 user 0m3.980s 00:06:04.580 sys 0m0.658s 00:06:04.580 ************************************ 00:06:04.580 END TEST default_locks 00:06:04.580 ************************************ 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.580 10:49:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.580 10:49:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:04.580 10:49:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.580 10:49:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.580 10:49:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.580 ************************************ 00:06:04.580 START TEST default_locks_via_rpc 00:06:04.580 ************************************ 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59823 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59823 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59823 ']' 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.580 10:49:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.839 [2024-11-15 10:49:51.452407] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:04.839 [2024-11-15 10:49:51.452556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59823 ] 00:06:04.840 [2024-11-15 10:49:51.634673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.099 [2024-11-15 10:49:51.751594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59823 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59823 00:06:06.049 10:49:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.331 10:49:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59823 00:06:06.331 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59823 ']' 00:06:06.331 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59823 00:06:06.331 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:06.591 10:49:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.591 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59823 00:06:06.591 killing process with pid 59823 00:06:06.591 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.591 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.591 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59823' 00:06:06.591 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59823 00:06:06.591 10:49:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59823 00:06:09.128 00:06:09.128 real 0m4.294s 00:06:09.128 user 0m4.256s 00:06:09.128 sys 0m0.749s 00:06:09.128 10:49:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.128 10:49:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.128 ************************************ 00:06:09.128 END TEST default_locks_via_rpc 00:06:09.128 ************************************ 00:06:09.128 10:49:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:09.128 10:49:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.128 10:49:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.128 10:49:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.128 ************************************ 00:06:09.128 START TEST non_locking_app_on_locked_coremask 00:06:09.128 ************************************ 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59897 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59897 /var/tmp/spdk.sock 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59897 ']' 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.128 10:49:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.128 [2024-11-15 10:49:55.818968] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
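The default_locks_via_rpc case that just finished exercised the runtime toggle rather than a startup flag: rpc_cmd framework_disable_cpumask_locks drops the lock files on a live target and framework_enable_cpumask_locks re-claims them. Driven by hand with SPDK's stock RPC client, the exchange would look roughly like this (socket path as used in this run):

    # Release the per-core lock files on a running target, confirm, re-claim.
    scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks | grep spdk_cpu_lock || echo "no core locks held anywhere"
    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks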
00:06:09.128 [2024-11-15 10:49:55.819101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59897 ] 00:06:09.388 [2024-11-15 10:49:56.003087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.388 [2024-11-15 10:49:56.116846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59913 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59913 /var/tmp/spdk2.sock 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59913 ']' 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.327 10:49:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.327 [2024-11-15 10:49:57.075448] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:10.327 [2024-11-15 10:49:57.075737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59913 ] 00:06:10.586 [2024-11-15 10:49:57.260626] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
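The "CPU core locks deactivated" notice above is the second target opting out of core locking, which is the whole point of non_locking_app_on_locked_coremask: two targets share core 0 without conflict because only the first takes the lock. In outline (readiness waits elided):

    build/bin/spdk_tgt -m 0x1 &      # first instance claims /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &     # same core, but takes no lock, so it starts fine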
00:06:10.586 [2024-11-15 10:49:57.260705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.846 [2024-11-15 10:49:57.494305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.383 10:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.383 10:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.383 10:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59897 00:06:13.383 10:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59897 00:06:13.383 10:49:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.978 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59897 00:06:13.978 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59897 ']' 00:06:13.978 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59897 00:06:13.978 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59897 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.979 killing process with pid 59897 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59897' 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59897 00:06:13.979 10:50:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59897 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59913 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59913 ']' 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59913 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59913 00:06:19.270 killing process with pid 59913 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59913' 00:06:19.270 10:50:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59913 00:06:19.270 10:50:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59913 00:06:21.178 00:06:21.178 real 0m12.061s 00:06:21.178 user 0m12.376s 00:06:21.178 sys 0m1.462s 00:06:21.178 10:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.178 10:50:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.178 ************************************ 00:06:21.178 END TEST non_locking_app_on_locked_coremask 00:06:21.178 ************************************ 00:06:21.178 10:50:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.178 10:50:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.178 10:50:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.178 10:50:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.178 ************************************ 00:06:21.178 START TEST locking_app_on_unlocked_coremask 00:06:21.178 ************************************ 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60070 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60070 /var/tmp/spdk.sock 00:06:21.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60070 ']' 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.178 10:50:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.178 [2024-11-15 10:50:07.957023] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:21.178 [2024-11-15 10:50:07.957336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60070 ] 00:06:21.437 [2024-11-15 10:50:08.141492] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
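Teardown in every case goes through the killprocess helper whose internals keep appearing in the trace: the kill -0 liveness probe, the ps comm= lookup, then kill and wait. Stripped to that visible behaviour, a simplified reading (the real helper also special-cases sudo-owned and FreeBSD processes):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                      # assert the target is still alive
        ps --no-headers -o comm= "$pid"     # name it for the log (reactor_0 here)
        kill "$pid"                         # SIGTERM
        wait "$pid" || true                 # reap it; non-zero exit is expected
    }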
00:06:21.437 [2024-11-15 10:50:08.141557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.437 [2024-11-15 10:50:08.257900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60086 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60086 /var/tmp/spdk2.sock 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60086 ']' 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.395 10:50:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.395 [2024-11-15 10:50:09.233651] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:22.395 [2024-11-15 10:50:09.233771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60086 ] 00:06:22.654 [2024-11-15 10:50:09.417535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.913 [2024-11-15 10:50:09.638155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.446 10:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:25.446 10:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:25.446 10:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60086 00:06:25.446 10:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60086 00:06:25.446 10:50:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60070 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60070 ']' 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60070 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60070 00:06:26.015 killing process with pid 60070 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60070' 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60070 00:06:26.015 10:50:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60070 00:06:31.323 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60086 00:06:31.323 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60086 ']' 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60086 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60086 00:06:31.324 killing process with pid 60086 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.324 10:50:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60086' 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60086 00:06:31.324 10:50:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60086 00:06:33.238 00:06:33.238 real 0m12.105s 00:06:33.238 user 0m12.428s 00:06:33.238 sys 0m1.469s 00:06:33.238 10:50:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.238 ************************************ 00:06:33.238 END TEST locking_app_on_unlocked_coremask 00:06:33.238 ************************************ 00:06:33.238 10:50:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.238 10:50:20 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.238 10:50:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.238 10:50:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.238 10:50:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.238 ************************************ 00:06:33.238 START TEST locking_app_on_locked_coremask 00:06:33.238 ************************************ 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60245 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60245 /var/tmp/spdk.sock 00:06:33.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60245 ']' 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.238 10:50:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.497 [2024-11-15 10:50:20.130876] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
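The START/END banners and the real/user/sys summaries bracketing each case come from the run_test wrapper. Reduced to its observable effect (a simplified model; the real autotest_common.sh helper also validates its argument count and manages xtrace), it behaves roughly like:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # produces the real/user/sys summary above
        local rc=$?
        echo "END TEST $name"
        return $rc
    }
    run_test locking_app_on_locked_coremask locking_app_on_locked_coremask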
00:06:33.497 [2024-11-15 10:50:20.131192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60245 ] 00:06:33.497 [2024-11-15 10:50:20.313456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.757 [2024-11-15 10:50:20.426317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60263 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60263 /var/tmp/spdk2.sock 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60263 /var/tmp/spdk2.sock 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:34.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60263 /var/tmp/spdk2.sock 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60263 ']' 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.693 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.693 [2024-11-15 10:50:21.389320] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:06:34.693 [2024-11-15 10:50:21.389640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60263 ] 00:06:34.956 [2024-11-15 10:50:21.571126] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60245 has claimed it. 00:06:34.956 [2024-11-15 10:50:21.571193] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:35.215 ERROR: process (pid: 60263) is no longer running 00:06:35.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60263) - No such process 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60245 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60245 00:06:35.215 10:50:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60245 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60245 ']' 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60245 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60245 00:06:35.784 killing process with pid 60245 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60245' 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60245 00:06:35.784 10:50:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60245 00:06:38.319 00:06:38.319 real 0m4.889s 00:06:38.319 user 0m5.037s 00:06:38.319 sys 0m0.879s 00:06:38.319 10:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.319 ************************************ 00:06:38.319 END 
TEST locking_app_on_locked_coremask 00:06:38.319 ************************************ 00:06:38.319 10:50:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.319 10:50:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:38.319 10:50:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.319 10:50:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.319 10:50:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.319 ************************************ 00:06:38.319 START TEST locking_overlapped_coremask 00:06:38.319 ************************************ 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60333 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:38.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60333 /var/tmp/spdk.sock 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60333 ']' 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.319 10:50:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.319 [2024-11-15 10:50:25.090005] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
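locking_app_on_locked_coremask, which just wrapped up, is a negative test: the second target must die with "Cannot create lock on core 0", and the NOT wrapper converts that expected failure into a pass. Its core is plain exit-status inversion (simplified; the real helper in autotest_common.sh also screens exit statuses above 128 for signals):

    NOT() {
        if "$@"; then
            return 1        # the command unexpectedly succeeded
        fi
        return 0            # failure was the expected outcome
    }
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock   # pid2 was 60263 in this run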
00:06:38.319 [2024-11-15 10:50:25.090317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60333 ] 00:06:38.578 [2024-11-15 10:50:25.274199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.578 [2024-11-15 10:50:25.391471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.578 [2024-11-15 10:50:25.391611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.578 [2024-11-15 10:50:25.391643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60355 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60355 /var/tmp/spdk2.sock 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60355 /var/tmp/spdk2.sock 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60355 /var/tmp/spdk2.sock 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60355 ']' 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.515 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.515 [2024-11-15 10:50:26.369294] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
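The overlapped case now starting engineers a deliberate collision: -m 0x7 pins cores 0-2 and -m 0x1c asks for cores 2-4, and since 0x7 & 0x1c = 0x4, core 2 is contested and the second target must abort. The launch pair in outline:

    build/bin/spdk_tgt -m 0x7 &                         # locks cores 0,1,2 (0b00111)
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock   # wants 2,3,4 (0b11100); dies on core 2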
00:06:39.515 [2024-11-15 10:50:26.369618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60355 ] 00:06:39.775 [2024-11-15 10:50:26.555708] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60333 has claimed it. 00:06:39.775 [2024-11-15 10:50:26.555783] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.344 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60355) - No such process 00:06:40.344 ERROR: process (pid: 60355) is no longer running 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60333 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60333 ']' 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60333 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.344 10:50:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.344 10:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60333 00:06:40.344 killing process with pid 60333 00:06:40.344 10:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.344 10:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.344 10:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60333' 00:06:40.344 10:50:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60333 00:06:40.344 10:50:27 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60333 00:06:42.881 00:06:42.881 real 0m4.474s 00:06:42.881 user 0m12.101s 00:06:42.881 sys 0m0.613s 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.881 ************************************ 00:06:42.881 END TEST locking_overlapped_coremask 00:06:42.881 ************************************ 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 10:50:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.881 10:50:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.881 10:50:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.881 10:50:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 ************************************ 00:06:42.881 START TEST locking_overlapped_coremask_via_rpc 00:06:42.881 ************************************ 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60420 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60420 /var/tmp/spdk.sock 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60420 ']' 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.881 10:50:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.881 [2024-11-15 10:50:29.627703] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:42.881 [2024-11-15 10:50:29.627854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60420 ] 00:06:43.141 [2024-11-15 10:50:29.810429] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
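Back in the overlapped run, check_remaining_locks (traced a little above) asserted that after the second target died, exactly the first target's lock files survived. The check is a straight glob-versus-brace-expansion comparison:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # what a 0x7 mask should leave
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] \
        && echo "exactly cores 0-2 remain locked"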
00:06:43.141 [2024-11-15 10:50:29.810489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.141 [2024-11-15 10:50:29.946758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.141 [2024-11-15 10:50:29.946887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.141 [2024-11-15 10:50:29.946913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60438 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60438 /var/tmp/spdk2.sock 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60438 ']' 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.077 10:50:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.354 [2024-11-15 10:50:30.991108] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:44.354 [2024-11-15 10:50:30.991271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60438 ] 00:06:44.354 [2024-11-15 10:50:31.189345] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
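The via_rpc variant booting here inverts the setup: both targets start with --disable-cpumask-locks (hence the pair of "CPU core locks deactivated" notices), and the locks are claimed only afterwards over JSON-RPC, so whichever target calls first wins the contested core. In outline (readiness waits elided):

    build/bin/spdk_tgt -m 0x7  --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    scripts/rpc.py framework_enable_cpumask_locks                    # target 1 claims cores 0,1,2
    scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks                               # must fail: core 2 is taken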
00:06:44.354 [2024-11-15 10:50:31.189397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.626 [2024-11-15 10:50:31.430330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.626 [2024-11-15 10:50:31.433628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.626 [2024-11-15 10:50:31.433664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.160 [2024-11-15 10:50:33.635775] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60420 has claimed it. 
00:06:47.160 request: 00:06:47.160 { 00:06:47.160 "method": "framework_enable_cpumask_locks", 00:06:47.160 "req_id": 1 00:06:47.160 } 00:06:47.160 Got JSON-RPC error response 00:06:47.160 response: 00:06:47.160 { 00:06:47.160 "code": -32603, 00:06:47.160 "message": "Failed to claim CPU core: 2" 00:06:47.160 } 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60420 /var/tmp/spdk.sock 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60420 ']' 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60438 /var/tmp/spdk2.sock 00:06:47.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60438 ']' 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
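The exchange above is the crux of this test: two targets were started with overlapping coremasks (-m 0x7 claims cores 0-2, -m 0x1c claims cores 2-4, so both want core 2), each with --disable-cpumask-locks so that neither takes the per-core locks at boot. Enabling the locks on the first target then claims its three cores, so the same RPC against the second target fails on the shared core 2 with error -32603, as shown. A minimal by-hand sketch of the same sequence (paths relative to the spdk checkout; the targets are backgrounded here for brevity, where the harness instead waits on each socket):

    # Start two targets with overlapping coremasks; core locks disabled at boot.
    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0,1,2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2,3,4

    # First target claims its cores; one lock file per claimed core appears.
    ./scripts/rpc.py framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*    # expect spdk_cpu_lock_000 .. spdk_cpu_lock_002

    # Second target now fails on the shared core 2 (JSON-RPC -32603, as above).
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks

The check_remaining_locks step below verifies exactly that lock-file set.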
00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.160 10:50:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:47.419 ************************************ 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:47.419 00:06:47.419 real 0m4.616s 00:06:47.419 user 0m1.320s 00:06:47.419 sys 0m0.281s 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.419 10:50:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.419 END TEST locking_overlapped_coremask_via_rpc 00:06:47.419 ************************************ 00:06:47.419 10:50:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:47.419 10:50:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60420 ]] 00:06:47.419 10:50:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60420 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60420 ']' 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60420 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60420 00:06:47.419 killing process with pid 60420 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60420' 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60420 00:06:47.419 10:50:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60420 00:06:49.956 10:50:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60438 ]] 00:06:49.956 10:50:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60438 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60438 ']' 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60438 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.956 
10:50:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60438 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:49.956 killing process with pid 60438 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60438' 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60438 00:06:49.956 10:50:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60438 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60420 ]] 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60420 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60420 ']' 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60420 00:06:52.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60420) - No such process 00:06:52.494 Process with pid 60420 is not found 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60420 is not found' 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60438 ]] 00:06:52.494 Process with pid 60438 is not found 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60438 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60438 ']' 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60438 00:06:52.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60438) - No such process 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60438 is not found' 00:06:52.494 10:50:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:52.494 00:06:52.494 real 0m52.139s 00:06:52.494 user 1m28.381s 00:06:52.494 sys 0m7.388s 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.494 10:50:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.494 ************************************ 00:06:52.494 END TEST cpu_locks 00:06:52.494 ************************************ 00:06:52.494 ************************************ 00:06:52.494 END TEST event 00:06:52.494 ************************************ 00:06:52.494 00:06:52.494 real 1m22.179s 00:06:52.494 user 2m25.434s 00:06:52.494 sys 0m11.839s 00:06:52.494 10:50:39 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.494 10:50:39 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.494 10:50:39 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.494 10:50:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.494 10:50:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.494 10:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:52.494 ************************************ 00:06:52.494 START TEST thread 00:06:52.494 ************************************ 00:06:52.494 10:50:39 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:52.754 * Looking for test storage... 
00:06:52.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.754 10:50:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.754 10:50:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.754 10:50:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.754 10:50:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.754 10:50:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.754 10:50:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.754 10:50:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.754 10:50:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.754 10:50:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.754 10:50:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.754 10:50:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.754 10:50:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:52.754 10:50:39 thread -- scripts/common.sh@345 -- # : 1 00:06:52.754 10:50:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.754 10:50:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.754 10:50:39 thread -- scripts/common.sh@365 -- # decimal 1 00:06:52.754 10:50:39 thread -- scripts/common.sh@353 -- # local d=1 00:06:52.754 10:50:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.754 10:50:39 thread -- scripts/common.sh@355 -- # echo 1 00:06:52.754 10:50:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.754 10:50:39 thread -- scripts/common.sh@366 -- # decimal 2 00:06:52.754 10:50:39 thread -- scripts/common.sh@353 -- # local d=2 00:06:52.754 10:50:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.754 10:50:39 thread -- scripts/common.sh@355 -- # echo 2 00:06:52.754 10:50:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.754 10:50:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.754 10:50:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.754 10:50:39 thread -- scripts/common.sh@368 -- # return 0 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.754 --rc genhtml_branch_coverage=1 00:06:52.754 --rc genhtml_function_coverage=1 00:06:52.754 --rc genhtml_legend=1 00:06:52.754 --rc geninfo_all_blocks=1 00:06:52.754 --rc geninfo_unexecuted_blocks=1 00:06:52.754 00:06:52.754 ' 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.754 --rc genhtml_branch_coverage=1 00:06:52.754 --rc genhtml_function_coverage=1 00:06:52.754 --rc genhtml_legend=1 00:06:52.754 --rc geninfo_all_blocks=1 00:06:52.754 --rc geninfo_unexecuted_blocks=1 00:06:52.754 00:06:52.754 ' 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:52.754 --rc genhtml_branch_coverage=1 00:06:52.754 --rc genhtml_function_coverage=1 00:06:52.754 --rc genhtml_legend=1 00:06:52.754 --rc geninfo_all_blocks=1 00:06:52.754 --rc geninfo_unexecuted_blocks=1 00:06:52.754 00:06:52.754 ' 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.754 --rc genhtml_branch_coverage=1 00:06:52.754 --rc genhtml_function_coverage=1 00:06:52.754 --rc genhtml_legend=1 00:06:52.754 --rc geninfo_all_blocks=1 00:06:52.754 --rc geninfo_unexecuted_blocks=1 00:06:52.754 00:06:52.754 ' 00:06:52.754 10:50:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.754 10:50:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:52.754 ************************************ 00:06:52.754 START TEST thread_poller_perf 00:06:52.754 ************************************ 00:06:52.754 10:50:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:52.754 [2024-11-15 10:50:39.532914] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:52.754 [2024-11-15 10:50:39.533136] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:06:53.014 [2024-11-15 10:50:39.715859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.014 [2024-11-15 10:50:39.831758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.014 Running 1000 pollers for 1 seconds with 1 microseconds period. 
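In the results that follow, the reported poller_cost is consistent with busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A quick check against the numbers below, using the same benchmark binary and flags as the xtrace above (per the run banner, -b is the poller count, -l the period in microseconds, -t the run time in seconds):

    # 1000 registered pollers, 1 us timer period, 1 second run.
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    # poller_cost (cyc)  ≈ busy / total_run_count = 2497671462 / 389000 ≈ 6420
    # poller_cost (nsec) ≈ 6420 cyc / 2.49 GHz (tsc_hz) ≈ 2578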
00:06:54.391 [2024-11-15T10:50:41.252Z] ====================================== 00:06:54.391 [2024-11-15T10:50:41.252Z] busy:2497671462 (cyc) 00:06:54.391 [2024-11-15T10:50:41.252Z] total_run_count: 389000 00:06:54.391 [2024-11-15T10:50:41.252Z] tsc_hz: 2490000000 (cyc) 00:06:54.391 [2024-11-15T10:50:41.252Z] ====================================== 00:06:54.391 [2024-11-15T10:50:41.252Z] poller_cost: 6420 (cyc), 2578 (nsec) 00:06:54.391 00:06:54.391 real 0m1.582s 00:06:54.391 user 0m1.362s 00:06:54.391 sys 0m0.111s 00:06:54.391 10:50:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.391 10:50:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.391 ************************************ 00:06:54.391 END TEST thread_poller_perf 00:06:54.391 ************************************ 00:06:54.391 10:50:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.391 10:50:41 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:54.391 10:50:41 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.391 10:50:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.391 ************************************ 00:06:54.391 START TEST thread_poller_perf 00:06:54.391 ************************************ 00:06:54.391 10:50:41 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.392 [2024-11-15 10:50:41.195001] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:06:54.392 [2024-11-15 10:50:41.195105] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60671 ] 00:06:54.649 [2024-11-15 10:50:41.378130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.649 Running 1000 pollers for 1 seconds with 0 microseconds period. 
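This second run repeats the measurement with -l 0, i.e. pollers registered with no timer period; its results follow. Set against the timed run above (2578 nsec per poll there vs 196 nsec here), the gap suggests most of the timed-poller cost is timer bookkeeping rather than the poller callback itself. The same arithmetic applies:

    # Same benchmark, period 0: pollers run on every reactor iteration.
    ./test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
    # poller_cost (cyc) ≈ 2493763340 / 5082000 ≈ 490, i.e. ≈ 196 nsec at 2.49 GHz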
00:06:54.649 [2024-11-15 10:50:41.492718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.026 [2024-11-15T10:50:42.887Z] ====================================== 00:06:56.026 [2024-11-15T10:50:42.887Z] busy:2493763340 (cyc) 00:06:56.026 [2024-11-15T10:50:42.887Z] total_run_count: 5082000 00:06:56.026 [2024-11-15T10:50:42.887Z] tsc_hz: 2490000000 (cyc) 00:06:56.026 [2024-11-15T10:50:42.887Z] ====================================== 00:06:56.026 [2024-11-15T10:50:42.887Z] poller_cost: 490 (cyc), 196 (nsec) 00:06:56.026 00:06:56.026 real 0m1.581s 00:06:56.026 user 0m1.362s 00:06:56.026 sys 0m0.109s 00:06:56.026 ************************************ 00:06:56.026 END TEST thread_poller_perf 00:06:56.026 ************************************ 00:06:56.026 10:50:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.026 10:50:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.026 10:50:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.026 ************************************ 00:06:56.026 END TEST thread 00:06:56.026 ************************************ 00:06:56.026 00:06:56.026 real 0m3.534s 00:06:56.026 user 0m2.903s 00:06:56.026 sys 0m0.421s 00:06:56.026 10:50:42 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.026 10:50:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.026 10:50:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:56.026 10:50:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:56.026 10:50:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:56.026 10:50:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:56.026 10:50:42 -- common/autotest_common.sh@10 -- # set +x 00:06:56.026 ************************************ 00:06:56.026 START TEST app_cmdline 00:06:56.026 ************************************ 00:06:56.026 10:50:42 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:56.285 * Looking for test storage... 
00:06:56.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:56.285 10:50:42 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:56.285 10:50:42 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:56.285 10:50:42 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:56.285 10:50:43 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:56.285 10:50:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:56.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.286 10:50:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.286 --rc genhtml_branch_coverage=1 00:06:56.286 --rc genhtml_function_coverage=1 00:06:56.286 --rc genhtml_legend=1 00:06:56.286 --rc geninfo_all_blocks=1 00:06:56.286 --rc geninfo_unexecuted_blocks=1 00:06:56.286 00:06:56.286 ' 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.286 --rc genhtml_branch_coverage=1 00:06:56.286 --rc genhtml_function_coverage=1 00:06:56.286 --rc genhtml_legend=1 00:06:56.286 --rc geninfo_all_blocks=1 00:06:56.286 --rc geninfo_unexecuted_blocks=1 00:06:56.286 00:06:56.286 ' 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.286 --rc genhtml_branch_coverage=1 00:06:56.286 --rc genhtml_function_coverage=1 00:06:56.286 --rc genhtml_legend=1 00:06:56.286 --rc geninfo_all_blocks=1 00:06:56.286 --rc geninfo_unexecuted_blocks=1 00:06:56.286 00:06:56.286 ' 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:56.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:56.286 --rc genhtml_branch_coverage=1 00:06:56.286 --rc genhtml_function_coverage=1 00:06:56.286 --rc genhtml_legend=1 00:06:56.286 --rc geninfo_all_blocks=1 00:06:56.286 --rc geninfo_unexecuted_blocks=1 00:06:56.286 00:06:56.286 ' 00:06:56.286 10:50:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:56.286 10:50:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60760 00:06:56.286 10:50:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60760 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60760 ']' 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.286 10:50:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:56.286 10:50:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:56.545 [2024-11-15 10:50:43.185662] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
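This target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC server answers only those two methods and rejects everything else with -32601 ("Method not found"), which is what the env_dpdk_get_mem_stats probe further below demonstrates. The same check by hand (paths relative to the spdk checkout):

    # Only the allowlisted methods are served.
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version         # ok: version JSON (SPDK v25.01-pre here)
    ./scripts/rpc.py rpc_get_methods          # ok: exactly the two allowed methods
    ./scripts/rpc.py env_dpdk_get_mem_stats   # rejected: JSON-RPC -32601, Method not found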
00:06:56.545 [2024-11-15 10:50:43.185963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60760 ] 00:06:56.545 [2024-11-15 10:50:43.368106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.804 [2024-11-15 10:50:43.482245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:57.742 { 00:06:57.742 "version": "SPDK v25.01-pre git sha1 f1a181ac3", 00:06:57.742 "fields": { 00:06:57.742 "major": 25, 00:06:57.742 "minor": 1, 00:06:57.742 "patch": 0, 00:06:57.742 "suffix": "-pre", 00:06:57.742 "commit": "f1a181ac3" 00:06:57.742 } 00:06:57.742 } 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:57.742 10:50:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:57.742 10:50:44 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:58.000 request: 00:06:58.000 { 00:06:58.000 "method": "env_dpdk_get_mem_stats", 00:06:58.000 "req_id": 1 00:06:58.000 } 00:06:58.000 Got JSON-RPC error response 00:06:58.000 response: 00:06:58.000 { 00:06:58.000 "code": -32601, 00:06:58.000 "message": "Method not found" 00:06:58.000 } 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.000 10:50:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60760 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60760 ']' 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60760 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60760 00:06:58.000 killing process with pid 60760 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60760' 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 60760 00:06:58.000 10:50:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 60760 00:07:00.536 00:07:00.536 real 0m4.368s 00:07:00.536 user 0m4.523s 00:07:00.536 sys 0m0.655s 00:07:00.536 10:50:47 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.536 ************************************ 00:07:00.536 END TEST app_cmdline 00:07:00.536 ************************************ 00:07:00.536 10:50:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:00.536 10:50:47 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.536 10:50:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.536 10:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.536 10:50:47 -- common/autotest_common.sh@10 -- # set +x 00:07:00.536 ************************************ 00:07:00.536 START TEST version 00:07:00.536 ************************************ 00:07:00.536 10:50:47 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:00.796 * Looking for test storage... 
00:07:00.796 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.796 10:50:47 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.796 10:50:47 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.796 10:50:47 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.796 10:50:47 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.796 10:50:47 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.796 10:50:47 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.796 10:50:47 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.796 10:50:47 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.796 10:50:47 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.796 10:50:47 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.796 10:50:47 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.796 10:50:47 version -- scripts/common.sh@344 -- # case "$op" in 00:07:00.796 10:50:47 version -- scripts/common.sh@345 -- # : 1 00:07:00.796 10:50:47 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.796 10:50:47 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.796 10:50:47 version -- scripts/common.sh@365 -- # decimal 1 00:07:00.796 10:50:47 version -- scripts/common.sh@353 -- # local d=1 00:07:00.796 10:50:47 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.796 10:50:47 version -- scripts/common.sh@355 -- # echo 1 00:07:00.796 10:50:47 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.796 10:50:47 version -- scripts/common.sh@366 -- # decimal 2 00:07:00.796 10:50:47 version -- scripts/common.sh@353 -- # local d=2 00:07:00.796 10:50:47 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.796 10:50:47 version -- scripts/common.sh@355 -- # echo 2 00:07:00.796 10:50:47 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.796 10:50:47 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.796 10:50:47 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.796 10:50:47 version -- scripts/common.sh@368 -- # return 0 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.796 --rc genhtml_branch_coverage=1 00:07:00.796 --rc genhtml_function_coverage=1 00:07:00.796 --rc genhtml_legend=1 00:07:00.796 --rc geninfo_all_blocks=1 00:07:00.796 --rc geninfo_unexecuted_blocks=1 00:07:00.796 00:07:00.796 ' 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.796 --rc genhtml_branch_coverage=1 00:07:00.796 --rc genhtml_function_coverage=1 00:07:00.796 --rc genhtml_legend=1 00:07:00.796 --rc geninfo_all_blocks=1 00:07:00.796 --rc geninfo_unexecuted_blocks=1 00:07:00.796 00:07:00.796 ' 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.796 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:00.796 --rc genhtml_branch_coverage=1 00:07:00.796 --rc genhtml_function_coverage=1 00:07:00.796 --rc genhtml_legend=1 00:07:00.796 --rc geninfo_all_blocks=1 00:07:00.796 --rc geninfo_unexecuted_blocks=1 00:07:00.796 00:07:00.796 ' 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.796 --rc genhtml_branch_coverage=1 00:07:00.796 --rc genhtml_function_coverage=1 00:07:00.796 --rc genhtml_legend=1 00:07:00.796 --rc geninfo_all_blocks=1 00:07:00.796 --rc geninfo_unexecuted_blocks=1 00:07:00.796 00:07:00.796 ' 00:07:00.796 10:50:47 version -- app/version.sh@17 -- # get_header_version major 00:07:00.796 10:50:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # cut -f2 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.796 10:50:47 version -- app/version.sh@17 -- # major=25 00:07:00.796 10:50:47 version -- app/version.sh@18 -- # get_header_version minor 00:07:00.796 10:50:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # cut -f2 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.796 10:50:47 version -- app/version.sh@18 -- # minor=1 00:07:00.796 10:50:47 version -- app/version.sh@19 -- # get_header_version patch 00:07:00.796 10:50:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # cut -f2 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.796 10:50:47 version -- app/version.sh@19 -- # patch=0 00:07:00.796 10:50:47 version -- app/version.sh@20 -- # get_header_version suffix 00:07:00.796 10:50:47 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # cut -f2 00:07:00.796 10:50:47 version -- app/version.sh@14 -- # tr -d '"' 00:07:00.796 10:50:47 version -- app/version.sh@20 -- # suffix=-pre 00:07:00.796 10:50:47 version -- app/version.sh@22 -- # version=25.1 00:07:00.796 10:50:47 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:00.796 10:50:47 version -- app/version.sh@28 -- # version=25.1rc0 00:07:00.796 10:50:47 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:00.796 10:50:47 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:00.796 10:50:47 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:00.796 10:50:47 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:00.796 ************************************ 00:07:00.796 END TEST version 00:07:00.796 ************************************ 00:07:00.796 00:07:00.796 real 0m0.347s 00:07:00.796 user 0m0.186s 00:07:00.796 sys 0m0.208s 00:07:00.796 10:50:47 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.796 10:50:47 version -- common/autotest_common.sh@10 -- # set +x 00:07:01.056 10:50:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:01.056 10:50:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:01.056 10:50:47 -- spdk/autotest.sh@194 -- # uname -s 00:07:01.056 10:50:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:01.056 10:50:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.056 10:50:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:01.056 10:50:47 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:01.056 10:50:47 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:01.056 10:50:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.056 10:50:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.056 10:50:47 -- common/autotest_common.sh@10 -- # set +x 00:07:01.056 ************************************ 00:07:01.056 START TEST blockdev_nvme 00:07:01.056 ************************************ 00:07:01.056 10:50:47 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:01.056 * Looking for test storage... 00:07:01.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:01.056 10:50:47 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.056 10:50:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.056 10:50:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.315 10:50:47 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.315 10:50:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:01.315 10:50:47 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.315 10:50:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.315 --rc genhtml_branch_coverage=1 00:07:01.315 --rc genhtml_function_coverage=1 00:07:01.315 --rc genhtml_legend=1 00:07:01.315 --rc geninfo_all_blocks=1 00:07:01.315 --rc geninfo_unexecuted_blocks=1 00:07:01.315 00:07:01.315 ' 00:07:01.315 10:50:47 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.315 --rc genhtml_branch_coverage=1 00:07:01.315 --rc genhtml_function_coverage=1 00:07:01.315 --rc genhtml_legend=1 00:07:01.315 --rc geninfo_all_blocks=1 00:07:01.315 --rc geninfo_unexecuted_blocks=1 00:07:01.315 00:07:01.315 ' 00:07:01.315 10:50:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.315 --rc genhtml_branch_coverage=1 00:07:01.315 --rc genhtml_function_coverage=1 00:07:01.315 --rc genhtml_legend=1 00:07:01.315 --rc geninfo_all_blocks=1 00:07:01.315 --rc geninfo_unexecuted_blocks=1 00:07:01.315 00:07:01.315 ' 00:07:01.315 10:50:47 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.315 --rc genhtml_branch_coverage=1 00:07:01.315 --rc genhtml_function_coverage=1 00:07:01.315 --rc genhtml_legend=1 00:07:01.315 --rc geninfo_all_blocks=1 00:07:01.315 --rc geninfo_unexecuted_blocks=1 00:07:01.315 00:07:01.315 ' 00:07:01.315 10:50:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:01.315 10:50:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:01.315 10:50:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:01.315 10:50:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60954 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60954 00:07:01.316 10:50:47 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:01.316 10:50:47 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60954 ']' 00:07:01.316 10:50:47 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.316 10:50:47 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.316 10:50:47 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.316 10:50:47 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.316 10:50:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.316 [2024-11-15 10:50:48.075377] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
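For this test the bdev layer is configured from gen_nvme.sh, whose output is fed to load_subsystem_config below; on this host it attaches the four QEMU NVMe controllers by PCI address. A compact view of that generated config (same content as the -j payload in the xtrace that follows):

    # Emits the bdev subsystem config for the locally visible NVMe devices.
    ./scripts/gen_nvme.sh
    # { "subsystem": "bdev", "config": [
    #     { "method": "bdev_nvme_attach_controller",
    #       "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
    #     ... and likewise Nvme1/Nvme2/Nvme3 at 0000:00:11.0, 0000:00:12.0, 0000:00:13.0
    # ] }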
00:07:01.316 [2024-11-15 10:50:48.075735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60954 ] 00:07:01.575 [2024-11-15 10:50:48.258462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.575 [2024-11-15 10:50:48.380588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.574 10:50:49 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.574 10:50:49 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:02.574 10:50:49 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:02.574 10:50:49 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:02.574 10:50:49 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:02.574 10:50:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:02.574 10:50:49 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:02.574 10:50:49 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:02.574 10:50:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.574 10:50:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.833 10:50:49 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.833 10:50:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:02.833 10:50:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.833 10:50:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.833 10:50:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.093 10:50:49 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.093 10:50:49 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:03.093 10:50:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:03.093 10:50:49 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.093 10:50:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.093 10:50:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:03.094 10:50:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "014c4e99-7961-4ba8-8d95-66cdcc6a9603"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "014c4e99-7961-4ba8-8d95-66cdcc6a9603",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2aacbbeb-8d37-4351-8d5b-6dde2e8861f2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2aacbbeb-8d37-4351-8d5b-6dde2e8861f2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4f8e7349-5c94-4152-8415-0b2e47cb49bb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4f8e7349-5c94-4152-8415-0b2e47cb49bb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "3bf48c9c-a790-404a-a476-431d41cfa716"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3bf48c9c-a790-404a-a476-431d41cfa716",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fa80c476-6a7d-4070-9d63-b9e72f81035e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fa80c476-6a7d-4070-9d63-b9e72f81035e",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "5af6e8b7-435b-465c-b382-6da0f51c7b25"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "5af6e8b7-435b-465c-b382-6da0f51c7b25",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:03.094 10:50:49 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:03.094 10:50:49 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:03.094 10:50:49 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:03.094 10:50:49 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:03.094 10:50:49 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60954 00:07:03.094 10:50:49 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60954 ']' 00:07:03.094 10:50:49 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60954 00:07:03.094 10:50:49 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:03.094 10:50:49 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.094 10:50:49 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60954 00:07:03.354 killing process with pid 60954 00:07:03.354 10:50:49 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.354 10:50:49 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.354 10:50:49 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60954' 00:07:03.354 10:50:49 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60954 00:07:03.354 10:50:49 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60954 00:07:05.889 10:50:52 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:05.889 10:50:52 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.889 10:50:52 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:05.889 10:50:52 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.889 10:50:52 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:05.889 ************************************ 00:07:05.889 START TEST bdev_hello_world 00:07:05.889 ************************************ 00:07:05.889 10:50:52 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:05.889 [2024-11-15 10:50:52.444325] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:05.889 [2024-11-15 10:50:52.444455] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61049 ] 00:07:05.889 [2024-11-15 10:50:52.625368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.889 [2024-11-15 10:50:52.740773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.826 [2024-11-15 10:50:53.398637] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:06.826 [2024-11-15 10:50:53.398688] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:06.826 [2024-11-15 10:50:53.398737] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:06.826 [2024-11-15 10:50:53.402064] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:06.826 [2024-11-15 10:50:53.402744] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:06.826 [2024-11-15 10:50:53.402889] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:06.826 [2024-11-15 10:50:53.403166] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
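
For reference, this hello_bdev pass can be reproduced by hand with the same binary and JSON config that run_test passed in above; a minimal sketch, assuming the job's workspace layout and root privileges for hugepage access:

    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1   # -b selects which bdev the example opens

The example opens the bdev, writes a buffer, then reads it back, which is the "Hello World!" round trip logged above.
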
00:07:06.826 00:07:06.826 [2024-11-15 10:50:53.403423] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:07.763 00:07:07.763 ************************************ 00:07:07.763 END TEST bdev_hello_world 00:07:07.763 ************************************ 00:07:07.763 real 0m2.159s 00:07:07.763 user 0m1.806s 00:07:07.763 sys 0m0.242s 00:07:07.763 10:50:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.764 10:50:54 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:07.764 10:50:54 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:07.764 10:50:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.764 10:50:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.764 10:50:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:07.764 ************************************ 00:07:07.764 START TEST bdev_bounds 00:07:07.764 ************************************ 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61091 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:07.764 Process bdevio pid: 61091 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61091' 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61091 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61091 ']' 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.764 10:50:54 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:08.023 [2024-11-15 10:50:54.680898] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
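
Stripped of the harness plumbing, the bdev_bounds test starting here boils down to two commands, both visible in the xtrace above (a sketch; -w is taken to mean the app waits for the perform_tests RPC before running anything):

    sudo test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # target side: stands up the six bdevs, then waits
    test/bdev/bdevio/tests.py perform_tests                             # client side: kicks off the per-bdev suites below
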
00:07:08.023 [2024-11-15 10:50:54.681212] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61091 ] 00:07:08.023 [2024-11-15 10:50:54.862763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.282 [2024-11-15 10:50:54.985207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.282 [2024-11-15 10:50:54.985299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.282 [2024-11-15 10:50:54.985330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:09.218 10:50:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.218 10:50:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:09.218 10:50:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:09.218 I/O targets: 00:07:09.218 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:09.218 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:09.218 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.218 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.218 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:09.218 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:09.218 00:07:09.218 00:07:09.218 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.218 http://cunit.sourceforge.net/ 00:07:09.218 00:07:09.218 00:07:09.218 Suite: bdevio tests on: Nvme3n1 00:07:09.218 Test: blockdev write read block ...passed 00:07:09.218 Test: blockdev write zeroes read block ...passed 00:07:09.218 Test: blockdev write zeroes read no split ...passed 00:07:09.218 Test: blockdev write zeroes read split ...passed 00:07:09.218 Test: blockdev write zeroes read split partial ...passed 00:07:09.218 Test: blockdev reset ...[2024-11-15 10:50:55.897442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:09.218 [2024-11-15 10:50:55.901484] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. passed 00:07:09.218 Test: blockdev write read 8 blocks ...
00:07:09.218 passed 00:07:09.218 Test: blockdev write read size > 128k ...passed 00:07:09.218 Test: blockdev write read invalid size ...passed 00:07:09.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.218 Test: blockdev write read max offset ...passed 00:07:09.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.218 Test: blockdev writev readv 8 blocks ...passed 00:07:09.218 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.218 Test: blockdev writev readv block ...passed 00:07:09.218 Test: blockdev writev readv size > 128k ...passed 00:07:09.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.218 Test: blockdev comparev and writev ...[2024-11-15 10:50:55.911754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b100a000 len:0x1000 00:07:09.218 [2024-11-15 10:50:55.911948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.218 passed 00:07:09.218 Test: blockdev nvme passthru rw ...passed 00:07:09.218 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:55.913101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.218 [2024-11-15 10:50:55.913264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.218 passed 00:07:09.218 Test: blockdev nvme admin passthru ...passed 00:07:09.218 Test: blockdev copy ...passed 00:07:09.218 Suite: bdevio tests on: Nvme2n3 00:07:09.218 Test: blockdev write read block ...passed 00:07:09.218 Test: blockdev write zeroes read block ...passed 00:07:09.218 Test: blockdev write zeroes read no split ...passed 00:07:09.218 Test: blockdev write zeroes read split ...passed 00:07:09.218 Test: blockdev write zeroes read split partial ...passed 00:07:09.218 Test: blockdev reset ...[2024-11-15 10:50:55.990309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:09.218 [2024-11-15 10:50:55.994838] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:09.218 Test: blockdev write read 8 blocks ...
00:07:09.218 passed 00:07:09.218 Test: blockdev write read size > 128k ...passed 00:07:09.218 Test: blockdev write read invalid size ...passed 00:07:09.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.218 Test: blockdev write read max offset ...passed 00:07:09.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.218 Test: blockdev writev readv 8 blocks ...passed 00:07:09.218 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.218 Test: blockdev writev readv block ...passed 00:07:09.218 Test: blockdev writev readv size > 128k ...passed 00:07:09.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.219 Test: blockdev comparev and writev ...[2024-11-15 10:50:56.004301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x294a06000 len:0x1000 00:07:09.219 [2024-11-15 10:50:56.004351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.219 passed 00:07:09.219 Test: blockdev nvme passthru rw ...passed 00:07:09.219 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:56.005340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.219 passed 00:07:09.219 Test: blockdev nvme admin passthru ...[2024-11-15 10:50:56.005375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.219 passed 00:07:09.219 Test: blockdev copy ...passed 00:07:09.219 Suite: bdevio tests on: Nvme2n2 00:07:09.219 Test: blockdev write read block ...passed 00:07:09.219 Test: blockdev write zeroes read block ...passed 00:07:09.219 Test: blockdev write zeroes read no split ...passed 00:07:09.219 Test: blockdev write zeroes read split ...passed 00:07:09.478 Test: blockdev write zeroes read split partial ...passed 00:07:09.478 Test: blockdev reset ...[2024-11-15 10:50:56.083042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:09.478 [2024-11-15 10:50:56.087024] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
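
Each "blockdev reset" test in these suites disconnects and reconnects the controller underneath live bdevs. The same operation can be triggered against a running target by hand; a sketch, assuming this SPDK build ships the bdev_nvme_reset_controller RPC:

    sudo scripts/rpc.py bdev_nvme_reset_controller Nvme2   # controller name as given at attach time; Nvme2 backs Nvme2n1..n3
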
00:07:09.478 passed 00:07:09.478 Test: blockdev write read 8 blocks ...passed 00:07:09.478 Test: blockdev write read size > 128k ...passed 00:07:09.478 Test: blockdev write read invalid size ...passed 00:07:09.478 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.478 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.478 Test: blockdev write read max offset ...passed 00:07:09.478 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.478 Test: blockdev writev readv 8 blocks ...passed 00:07:09.478 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.478 Test: blockdev writev readv block ...passed 00:07:09.478 Test: blockdev writev readv size > 128k ...passed 00:07:09.478 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.478 Test: blockdev comparev and writev ...[2024-11-15 10:50:56.096242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc83c000 len:0x1000 00:07:09.478 [2024-11-15 10:50:56.096301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.478 passed 00:07:09.478 Test: blockdev nvme passthru rw ...passed 00:07:09.478 Test: blockdev nvme passthru vendor specific ...passed 00:07:09.478 Test: blockdev nvme admin passthru ...[2024-11-15 10:50:56.097151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.478 [2024-11-15 10:50:56.097190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.478 passed 00:07:09.478 Test: blockdev copy ...passed 00:07:09.478 Suite: bdevio tests on: Nvme2n1 00:07:09.479 Test: blockdev write read block ...passed 00:07:09.479 Test: blockdev write zeroes read block ...passed 00:07:09.479 Test: blockdev write zeroes read no split ...passed 00:07:09.479 Test: blockdev write zeroes read split ...passed 00:07:09.479 Test: blockdev write zeroes read split partial ...passed 00:07:09.479 Test: blockdev reset ...[2024-11-15 10:50:56.176789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:09.479 [2024-11-15 10:50:56.181050] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:09.479 Test: blockdev write read 8 blocks ...
00:07:09.479 passed 00:07:09.479 Test: blockdev write read size > 128k ...passed 00:07:09.479 Test: blockdev write read invalid size ...passed 00:07:09.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.479 Test: blockdev write read max offset ...passed 00:07:09.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.479 Test: blockdev writev readv 8 blocks ...passed 00:07:09.479 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.479 Test: blockdev writev readv block ...passed 00:07:09.479 Test: blockdev writev readv size > 128k ...passed 00:07:09.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.479 Test: blockdev comparev and writev ...[2024-11-15 10:50:56.191975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc838000 len:0x1000 00:07:09.479 [2024-11-15 10:50:56.192151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.479 passed 00:07:09.479 Test: blockdev nvme passthru rw ...passed 00:07:09.479 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:56.193577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.479 [2024-11-15 10:50:56.193720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.479 passed 00:07:09.479 Test: blockdev nvme admin passthru ...passed 00:07:09.479 Test: blockdev copy ...passed 00:07:09.479 Suite: bdevio tests on: Nvme1n1 00:07:09.479 Test: blockdev write read block ...passed 00:07:09.479 Test: blockdev write zeroes read block ...passed 00:07:09.479 Test: blockdev write zeroes read no split ...passed 00:07:09.479 Test: blockdev write zeroes read split ...passed 00:07:09.479 Test: blockdev write zeroes read split partial ...passed 00:07:09.479 Test: blockdev reset ...[2024-11-15 10:50:56.271324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:09.479 [2024-11-15 10:50:56.275054] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:07:09.479 Test: blockdev write read 8 blocks ...
00:07:09.479 passed 00:07:09.479 Test: blockdev write read size > 128k ...passed 00:07:09.479 Test: blockdev write read invalid size ...passed 00:07:09.479 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.479 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.479 Test: blockdev write read max offset ...passed 00:07:09.479 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.479 Test: blockdev writev readv 8 blocks ...passed 00:07:09.479 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.479 Test: blockdev writev readv block ...passed 00:07:09.479 Test: blockdev writev readv size > 128k ...passed 00:07:09.479 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.479 Test: blockdev comparev and writev ...[2024-11-15 10:50:56.284357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cc834000 len:0x1000 00:07:09.479 [2024-11-15 10:50:56.284417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:09.479 passed 00:07:09.479 Test: blockdev nvme passthru rw ...passed 00:07:09.479 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:56.285341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:09.479 [2024-11-15 10:50:56.285381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:09.479 passed 00:07:09.479 Test: blockdev nvme admin passthru ...passed 00:07:09.479 Test: blockdev copy ...passed 00:07:09.479 Suite: bdevio tests on: Nvme0n1 00:07:09.479 Test: blockdev write read block ...passed 00:07:09.479 Test: blockdev write zeroes read block ...passed 00:07:09.479 Test: blockdev write zeroes read no split ...passed 00:07:09.738 Test: blockdev write zeroes read split ...passed 00:07:09.738 Test: blockdev write zeroes read split partial ...passed 00:07:09.738 Test: blockdev reset ...[2024-11-15 10:50:56.369207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:09.738 [2024-11-15 10:50:56.373184] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:07:09.738 Test: blockdev write read 8 blocks ... 00:07:09.738 passed 00:07:09.738 Test: blockdev write read size > 128k ...passed 00:07:09.738 Test: blockdev write read invalid size ...passed 00:07:09.738 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:09.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:09.738 Test: blockdev write read max offset ...passed 00:07:09.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:09.738 Test: blockdev writev readv 8 blocks ...passed 00:07:09.738 Test: blockdev writev readv 30 x 1block ...passed 00:07:09.738 Test: blockdev writev readv block ...passed 00:07:09.738 Test: blockdev writev readv size > 128k ...passed 00:07:09.738 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:09.738 Test: blockdev comparev and writev ...[2024-11-15 10:50:56.383260] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:09.738 separate metadata which is not supported yet.
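
The comparev skip just logged follows from bdev layout rather than a failure: in the bdev_get_bdevs dump earlier, Nvme0n1 alone reports "md_size": 64 with "md_interleave": false, i.e. separate metadata, which comparev_and_writev does not support yet. A quick filter for such bdevs (a sketch; the jq expression is illustrative):

    sudo scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'
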
00:07:09.738 passed 00:07:09.738 Test: blockdev nvme passthru rw ...passed 00:07:09.739 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:50:56.384472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:09.739 [2024-11-15 10:50:56.384658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:09.739 passed 00:07:09.739 Test: blockdev nvme admin passthru ...passed 00:07:09.739 Test: blockdev copy ...passed 00:07:09.739 00:07:09.739 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.739 suites 6 6 n/a 0 0 00:07:09.739 tests 138 138 138 0 0 00:07:09.739 asserts 893 893 893 0 n/a 00:07:09.739 00:07:09.739 Elapsed time = 1.516 seconds 00:07:09.739 0 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61091 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61091 ']' 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61091 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61091 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.739 killing process with pid 61091 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61091' 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61091 00:07:09.739 10:50:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61091 00:07:10.678 10:50:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:10.678 00:07:10.678 real 0m2.917s 00:07:10.678 user 0m7.507s 00:07:10.678 sys 0m0.398s 00:07:10.678 10:50:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.678 10:50:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:10.678 ************************************ 00:07:10.678 END TEST bdev_bounds 00:07:10.678 ************************************ 00:07:10.966 10:50:57 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:10.966 10:50:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:10.966 10:50:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.966 10:50:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.966 ************************************ 00:07:10.966 START TEST bdev_nbd 00:07:10.966 ************************************ 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:10.966 10:50:57 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61156 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61156 /var/tmp/spdk-nbd.sock 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61156 ']' 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.966 10:50:57 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:10.966 [2024-11-15 10:50:57.686940] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
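
bdev_nbd exercises the kernel NBD path: each bdev is exported as a /dev/nbdX node, read through the kernel block layer with dd, then torn down. Condensed from the RPC and dd calls that follow (socket path, bdev name, and test file as in this run):

    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0   # export the bdev as a kernel block device
    sudo dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct           # one direct 4 KiB read, verified below
    sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0            # detach the NBD device again
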
00:07:10.966 [2024-11-15 10:50:57.687120] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.237 [2024-11-15 10:50:57.869772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.237 [2024-11-15 10:50:57.990244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.174 1+0 records in 
00:07:12.174 1+0 records out 00:07:12.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476655 s, 8.6 MB/s 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.174 10:50:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.433 1+0 records in 00:07:12.433 1+0 records out 00:07:12.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000768273 s, 5.3 MB/s 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.433 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.692 1+0 records in 00:07:12.692 1+0 records out 00:07:12.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498437 s, 8.2 MB/s 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.692 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.952 1+0 records in 00:07:12.952 1+0 records out 00:07:12.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721037 s, 5.7 MB/s 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.952 10:50:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:12.952 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.212 1+0 records in 00:07:13.212 1+0 records out 00:07:13.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758562 s, 5.4 MB/s 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.212 10:50:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.471 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:13.472 1+0 records in 00:07:13.472 1+0 records out 00:07:13.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896182 s, 4.6 MB/s 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:13.472 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.731 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:13.731 { 00:07:13.731 "nbd_device": "/dev/nbd0", 00:07:13.731 "bdev_name": "Nvme0n1" 00:07:13.731 }, 00:07:13.731 { 00:07:13.731 "nbd_device": "/dev/nbd1", 00:07:13.731 "bdev_name": "Nvme1n1" 00:07:13.731 }, 00:07:13.731 { 00:07:13.731 "nbd_device": "/dev/nbd2", 00:07:13.731 "bdev_name": "Nvme2n1" 00:07:13.731 }, 00:07:13.731 { 00:07:13.731 "nbd_device": "/dev/nbd3", 00:07:13.731 "bdev_name": "Nvme2n2" 00:07:13.731 }, 00:07:13.731 { 00:07:13.731 "nbd_device": "/dev/nbd4", 00:07:13.731 "bdev_name": "Nvme2n3" 00:07:13.731 }, 00:07:13.731 { 00:07:13.731 "nbd_device": "/dev/nbd5", 00:07:13.731 "bdev_name": "Nvme3n1" 00:07:13.731 } 00:07:13.731 ]' 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:13.732 { 00:07:13.732 "nbd_device": "/dev/nbd0", 00:07:13.732 "bdev_name": "Nvme0n1" 00:07:13.732 }, 00:07:13.732 { 00:07:13.732 "nbd_device": "/dev/nbd1", 00:07:13.732 "bdev_name": "Nvme1n1" 00:07:13.732 }, 00:07:13.732 { 00:07:13.732 "nbd_device": "/dev/nbd2", 00:07:13.732 "bdev_name": "Nvme2n1" 00:07:13.732 }, 00:07:13.732 { 00:07:13.732 "nbd_device": "/dev/nbd3", 00:07:13.732 "bdev_name": "Nvme2n2" 00:07:13.732 }, 00:07:13.732 { 00:07:13.732 "nbd_device": "/dev/nbd4", 00:07:13.732 "bdev_name": "Nvme2n3" 00:07:13.732 }, 00:07:13.732 { 00:07:13.732 "nbd_device": "/dev/nbd5", 00:07:13.732 "bdev_name": "Nvme3n1" 00:07:13.732 } 00:07:13.732 ]' 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.732 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.991 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.250 10:51:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:14.510 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.769 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:15.028 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.029 10:51:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.288 10:51:02 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:15.288 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:15.548 /dev/nbd0 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.548 
10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.548 1+0 records in 00:07:15.548 1+0 records out 00:07:15.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906543 s, 4.5 MB/s 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:15.548 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:15.807 /dev/nbd1 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.807 1+0 records in 00:07:15.807 1+0 records out 00:07:15.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136084 s, 3.0 MB/s 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:15.807 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:16.067 /dev/nbd10 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.067 1+0 records in 00:07:16.067 1+0 records out 00:07:16.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098509 s, 4.2 MB/s 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.067 10:51:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:16.326 /dev/nbd11 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.326 1+0 records in 00:07:16.326 1+0 records out 00:07:16.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691743 s, 5.9 MB/s 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.326 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.327 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.327 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.327 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:16.586 /dev/nbd12 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.586 1+0 records in 00:07:16.586 1+0 records out 00:07:16.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00104888 s, 3.9 MB/s 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.586 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:16.845 /dev/nbd13 00:07:16.845 10:51:03 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:16.845 1+0 records in 00:07:16.845 1+0 records out 00:07:16.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641138 s, 6.4 MB/s 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.845 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.104 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd0", 00:07:17.104 "bdev_name": "Nvme0n1" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd1", 00:07:17.104 "bdev_name": "Nvme1n1" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd10", 00:07:17.104 "bdev_name": "Nvme2n1" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd11", 00:07:17.104 "bdev_name": "Nvme2n2" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd12", 00:07:17.104 "bdev_name": "Nvme2n3" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd13", 00:07:17.104 "bdev_name": "Nvme3n1" 00:07:17.104 } 00:07:17.104 ]' 00:07:17.104 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd0", 00:07:17.104 "bdev_name": "Nvme0n1" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd1", 00:07:17.104 "bdev_name": "Nvme1n1" 00:07:17.104 }, 00:07:17.104 { 00:07:17.104 "nbd_device": "/dev/nbd10", 00:07:17.104 "bdev_name": "Nvme2n1" 00:07:17.104 }, 00:07:17.104 
{ 00:07:17.104 "nbd_device": "/dev/nbd11", 00:07:17.105 "bdev_name": "Nvme2n2" 00:07:17.105 }, 00:07:17.105 { 00:07:17.105 "nbd_device": "/dev/nbd12", 00:07:17.105 "bdev_name": "Nvme2n3" 00:07:17.105 }, 00:07:17.105 { 00:07:17.105 "nbd_device": "/dev/nbd13", 00:07:17.105 "bdev_name": "Nvme3n1" 00:07:17.105 } 00:07:17.105 ]' 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.105 /dev/nbd1 00:07:17.105 /dev/nbd10 00:07:17.105 /dev/nbd11 00:07:17.105 /dev/nbd12 00:07:17.105 /dev/nbd13' 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.105 /dev/nbd1 00:07:17.105 /dev/nbd10 00:07:17.105 /dev/nbd11 00:07:17.105 /dev/nbd12 00:07:17.105 /dev/nbd13' 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.105 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:17.105 256+0 records in 00:07:17.105 256+0 records out 00:07:17.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013258 s, 79.1 MB/s 00:07:17.364 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.364 10:51:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.364 256+0 records in 00:07:17.364 256+0 records out 00:07:17.364 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128078 s, 8.2 MB/s 00:07:17.364 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.364 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.623 256+0 records in 00:07:17.623 256+0 records out 00:07:17.623 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130276 s, 8.0 MB/s 00:07:17.623 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.623 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:17.623 256+0 records in 00:07:17.623 256+0 records out 00:07:17.623 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.126265 s, 8.3 MB/s 00:07:17.623 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.623 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:17.882 256+0 records in 00:07:17.882 256+0 records out 00:07:17.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12976 s, 8.1 MB/s 00:07:17.882 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.882 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:17.882 256+0 records in 00:07:17.882 256+0 records out 00:07:17.882 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140126 s, 7.5 MB/s 00:07:17.882 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.882 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:18.139 256+0 records in 00:07:18.139 256+0 records out 00:07:18.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138406 s, 7.6 MB/s 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.139 10:51:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.397 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.655 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.913 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.171 10:51:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:19.429 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:19.429 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:19.429 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.430 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:19.688 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:19.946 malloc_lvol_verify 00:07:19.946 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:20.204 5c0b6dcc-761b-4b91-9eb4-47253d0686cf 00:07:20.204 10:51:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:20.499 3c638ee3-468a-4780-8201-1811381737c1 00:07:20.499 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:20.499 /dev/nbd0 00:07:20.499 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:20.499 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:20.499 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:20.499 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:20.499 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:20.499 mke2fs 1.47.0 (5-Feb-2023) 00:07:20.499 Discarding device blocks: 0/4096 done 00:07:20.499 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:20.499 00:07:20.499 Allocating group tables: 0/1 done 00:07:20.499 Writing inode tables: 0/1 done 00:07:20.769 Creating journal (1024 blocks): done 00:07:20.769 Writing superblocks and filesystem accounting information: 0/1 done 00:07:20.769 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.769 10:51:07 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61156 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61156 ']' 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61156 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.769 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61156 00:07:21.028 killing process with pid 61156 00:07:21.028 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.028 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.028 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61156' 00:07:21.028 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61156 00:07:21.028 10:51:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61156 00:07:22.405 10:51:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:22.405 00:07:22.405 real 0m11.263s 00:07:22.405 user 0m14.645s 00:07:22.405 sys 0m4.579s 00:07:22.405 ************************************ 00:07:22.405 END TEST bdev_nbd 00:07:22.405 ************************************ 00:07:22.405 10:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.405 10:51:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 10:51:08 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:22.405 skipping fio tests on NVMe due to multi-ns failures. 00:07:22.405 10:51:08 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:22.405 10:51:08 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
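The six-device teardown above replays one pattern per device: nbd_stop_disk over the RPC socket, then waitfornbd_exit polls /proc/partitions until the kernel drops the node. A minimal sketch of that wait loop, assuming the same 20-iteration budget seen in the trace (the real helper is waitfornbd_exit in test/bdev/nbd_common.sh; the sleep interval here is an assumption, not taken from the log):

# Sketch only -- mirrors the waitfornbd_exit trace above, not a verbatim copy.
waitfornbd_exit_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        # -w matches whole words, so nbd1 does not also match nbd10
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            return 0   # device node is gone, stop waiting
        fi
        sleep 0.1      # assumed pacing between polls
    done
    return 1           # still listed after 20 tries: fail
}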
00:07:22.405 10:51:08 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:22.405 10:51:08 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.405 10:51:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:22.405 10:51:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.405 10:51:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.405 ************************************ 00:07:22.405 START TEST bdev_verify 00:07:22.405 ************************************ 00:07:22.405 10:51:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:22.405 [2024-11-15 10:51:09.011489] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:22.405 [2024-11-15 10:51:09.011630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61545 ] 00:07:22.405 [2024-11-15 10:51:09.194570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.663 [2024-11-15 10:51:09.315747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.663 [2024-11-15 10:51:09.315773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.229 Running I/O for 5 seconds... 00:07:25.542 22656.00 IOPS, 88.50 MiB/s [2024-11-15T10:51:13.338Z] 22784.00 IOPS, 89.00 MiB/s [2024-11-15T10:51:14.275Z] 23061.33 IOPS, 90.08 MiB/s [2024-11-15T10:51:15.231Z] 23104.00 IOPS, 90.25 MiB/s [2024-11-15T10:51:15.231Z] 23014.40 IOPS, 89.90 MiB/s 00:07:28.370 Latency(us) 00:07:28.370 [2024-11-15T10:51:15.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.370 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x0 length 0xbd0bd 00:07:28.370 Nvme0n1 : 5.06 1885.43 7.36 0.00 0.00 67570.22 9896.20 77485.13 00:07:28.370 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:28.370 Nvme0n1 : 5.03 1908.21 7.45 0.00 0.00 66854.71 14528.46 72431.76 00:07:28.370 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x0 length 0xa0000 00:07:28.370 Nvme1n1 : 5.07 1892.44 7.39 0.00 0.00 67241.03 10633.15 62746.11 00:07:28.370 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0xa0000 length 0xa0000 00:07:28.370 Nvme1n1 : 5.06 1908.65 7.46 0.00 0.00 66673.68 8948.69 69483.95 00:07:28.370 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x0 length 0x80000 00:07:28.370 Nvme2n1 : 5.07 1891.92 7.39 0.00 0.00 67057.92 10791.07 61482.77 00:07:28.370 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x80000 length 0x80000 00:07:28.370 Nvme2n1 : 5.08 1916.40 7.49 0.00 0.00 66399.14 9685.64 71168.41 00:07:28.370 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x0 length 0x80000 00:07:28.370 Nvme2n2 : 5.08 1891.01 7.39 0.00 0.00 66961.79 12317.61 64009.46 00:07:28.370 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x80000 length 0x80000 00:07:28.370 Nvme2n2 : 5.08 1915.25 7.48 0.00 0.00 66313.03 11106.90 72431.76 00:07:28.370 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x0 length 0x80000 00:07:28.370 Nvme2n3 : 5.08 1890.44 7.38 0.00 0.00 66895.06 12633.45 65272.80 00:07:28.370 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x80000 length 0x80000 00:07:28.370 Nvme2n3 : 5.08 1914.38 7.48 0.00 0.00 66228.92 12475.53 74116.22 00:07:28.370 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x0 length 0x20000 00:07:28.370 Nvme3n1 : 5.08 1889.98 7.38 0.00 0.00 66804.74 10843.71 67378.38 00:07:28.370 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:28.370 Verification LBA range: start 0x20000 length 0x20000 00:07:28.370 Nvme3n1 : 5.08 1913.98 7.48 0.00 0.00 66131.42 12475.53 74958.44 00:07:28.370 [2024-11-15T10:51:15.231Z] =================================================================================================================== 00:07:28.370 [2024-11-15T10:51:15.231Z] Total : 22818.09 89.13 0.00 0.00 66758.29 8948.69 77485.13 00:07:29.749 00:07:29.749 real 0m7.668s 00:07:29.749 user 0m14.184s 00:07:29.749 sys 0m0.303s 00:07:29.749 10:51:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.749 10:51:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:29.749 ************************************ 00:07:29.749 END TEST bdev_verify 00:07:29.749 ************************************ 00:07:30.008 10:51:16 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.008 10:51:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:30.008 10:51:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.008 10:51:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:30.008 ************************************ 00:07:30.008 START TEST bdev_verify_big_io 00:07:30.008 ************************************ 00:07:30.008 10:51:16 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:30.008 [2024-11-15 10:51:16.734639] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:07:30.008 [2024-11-15 10:51:16.734779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61650 ] 00:07:30.267 [2024-11-15 10:51:16.914951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:30.267 [2024-11-15 10:51:17.030722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.267 [2024-11-15 10:51:17.030727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.205 Running I/O for 5 seconds... 00:07:35.117 1965.00 IOPS, 122.81 MiB/s [2024-11-15T10:51:23.900Z] 2893.00 IOPS, 180.81 MiB/s [2024-11-15T10:51:23.900Z] 2646.67 IOPS, 165.42 MiB/s 00:07:37.039 Latency(us) 00:07:37.039 [2024-11-15T10:51:23.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:37.039 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x0 length 0xbd0b 00:07:37.039 Nvme0n1 : 5.65 156.60 9.79 0.00 0.00 796396.20 30951.94 862443.23 00:07:37.039 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:37.039 Nvme0n1 : 5.55 160.33 10.02 0.00 0.00 779505.61 42322.04 835491.88 00:07:37.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x0 length 0xa000 00:07:37.039 Nvme1n1 : 5.65 154.79 9.67 0.00 0.00 779388.20 32215.29 727686.48 00:07:37.039 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0xa000 length 0xa000 00:07:37.039 Nvme1n1 : 5.55 161.34 10.08 0.00 0.00 757107.65 72852.87 727686.48 00:07:37.039 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x0 length 0x8000 00:07:37.039 Nvme2n1 : 5.65 158.60 9.91 0.00 0.00 747645.96 80854.05 744531.07 00:07:37.039 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x8000 length 0x8000 00:07:37.039 Nvme2n1 : 5.56 161.26 10.08 0.00 0.00 737203.39 74116.22 710841.88 00:07:37.039 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x0 length 0x8000 00:07:37.039 Nvme2n2 : 5.65 158.54 9.91 0.00 0.00 728376.43 81696.28 761375.67 00:07:37.039 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x8000 length 0x8000 00:07:37.039 Nvme2n2 : 5.63 163.70 10.23 0.00 0.00 706204.82 67378.38 734424.31 00:07:37.039 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x0 length 0x8000 00:07:37.039 Nvme2n3 : 5.71 168.06 10.50 0.00 0.00 672378.85 31794.17 771482.42 00:07:37.039 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x8000 length 0x8000 00:07:37.039 Nvme2n3 : 5.71 175.42 10.96 0.00 0.00 644687.34 23477.15 747899.99 00:07:37.039 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:37.039 Verification LBA range: start 0x0 length 0x2000 00:07:37.039 Nvme3n1 : 5.75 181.93 11.37 0.00 0.00 607648.90 2302.97 791695.94 00:07:37.039 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:07:37.039 Verification LBA range: start 0x2000 length 0x2000 00:07:37.039 Nvme3n1 : 5.75 196.25 12.27 0.00 0.00 565047.68 1197.55 764744.58 00:07:37.039 [2024-11-15T10:51:23.900Z] =================================================================================================================== 00:07:37.039 [2024-11-15T10:51:23.900Z] Total : 1996.82 124.80 0.00 0.00 704667.48 1197.55 862443.23 00:07:38.943 00:07:38.943 real 0m8.873s 00:07:38.943 user 0m16.572s 00:07:38.943 sys 0m0.328s 00:07:38.943 10:51:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.943 ************************************ 00:07:38.943 END TEST bdev_verify_big_io 00:07:38.943 ************************************ 00:07:38.943 10:51:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:38.943 10:51:25 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.943 10:51:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.943 10:51:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.943 10:51:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.943 ************************************ 00:07:38.943 START TEST bdev_write_zeroes 00:07:38.943 ************************************ 00:07:38.943 10:51:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.943 [2024-11-15 10:51:25.684763] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:38.943 [2024-11-15 10:51:25.684891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61759 ] 00:07:39.202 [2024-11-15 10:51:25.863431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.202 [2024-11-15 10:51:25.978409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.137 Running I/O for 1 seconds... 
00:07:41.069 79104.00 IOPS, 309.00 MiB/s 00:07:41.069 Latency(us) 00:07:41.069 [2024-11-15T10:51:27.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.069 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:41.069 Nvme0n1 : 1.02 13096.37 51.16 0.00 0.00 9754.12 8211.74 22950.76 00:07:41.069 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:41.069 Nvme1n1 : 1.02 13083.65 51.11 0.00 0.00 9752.62 8422.30 23371.87 00:07:41.069 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:41.070 Nvme2n1 : 1.02 13072.11 51.06 0.00 0.00 9720.20 8211.74 20318.79 00:07:41.070 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:41.070 Nvme2n2 : 1.02 13060.73 51.02 0.00 0.00 9699.24 8159.10 18423.78 00:07:41.070 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:41.070 Nvme2n3 : 1.03 13048.64 50.97 0.00 0.00 9674.64 8211.74 17897.38 00:07:41.070 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:41.070 Nvme3n1 : 1.03 13036.88 50.93 0.00 0.00 9657.50 7053.67 19371.28 00:07:41.070 [2024-11-15T10:51:27.931Z] =================================================================================================================== 00:07:41.070 [2024-11-15T10:51:27.931Z] Total : 78398.37 306.24 0.00 0.00 9709.72 7053.67 23371.87 00:07:42.022 00:07:42.022 real 0m3.247s 00:07:42.022 user 0m2.867s 00:07:42.022 sys 0m0.267s 00:07:42.022 10:51:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.022 ************************************ 00:07:42.022 END TEST bdev_write_zeroes 00:07:42.022 ************************************ 00:07:42.022 10:51:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:42.281 10:51:28 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.281 10:51:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:42.281 10:51:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.281 10:51:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.281 ************************************ 00:07:42.281 START TEST bdev_json_nonenclosed 00:07:42.281 ************************************ 00:07:42.281 10:51:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.281 [2024-11-15 10:51:28.997050] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
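bdev_json_nonenclosed is a negative test: bdevperf is handed a config whose top-level JSON value is not an object and must bail out through spdk_app_stop with the "not enclosed in {}" error rather than run any I/O. A sketch of the kind of input that trips the check (illustrative only; the checked-in fixture is test/bdev/nonenclosed.json and its exact contents are not shown in this log):

# Valid JSON, but the top level is an array, not an object, so
# json_config_prepare_ctx rejects it as "not enclosed in {}".
cat > /tmp/nonenclosed.json <<'EOF'
[ { "subsystems": [] } ]
EOF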
00:07:42.281 [2024-11-15 10:51:28.997179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61814 ] 00:07:42.540 [2024-11-15 10:51:29.179801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.540 [2024-11-15 10:51:29.298080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.540 [2024-11-15 10:51:29.298192] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:42.540 [2024-11-15 10:51:29.298213] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:42.540 [2024-11-15 10:51:29.298225] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.799 00:07:42.799 real 0m0.646s 00:07:42.799 user 0m0.410s 00:07:42.799 sys 0m0.132s 00:07:42.799 10:51:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.799 ************************************ 00:07:42.799 END TEST bdev_json_nonenclosed 00:07:42.799 ************************************ 00:07:42.799 10:51:29 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:42.799 10:51:29 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:42.799 10:51:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:42.799 10:51:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.799 10:51:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:42.799 ************************************ 00:07:42.799 START TEST bdev_json_nonarray 00:07:42.799 ************************************ 00:07:42.799 10:51:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:43.059 [2024-11-15 10:51:29.721635] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:43.059 [2024-11-15 10:51:29.721748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61838 ] 00:07:43.059 [2024-11-15 10:51:29.902229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.317 [2024-11-15 10:51:30.020475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.318 [2024-11-15 10:51:30.020587] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
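bdev_json_nonarray is the companion check: the file does parse as an object, but "subsystems" maps to something other than an array, which produces the "'subsystems' should be an array" error logged above. An illustrative shape (assumed; the real fixture is test/bdev/nonarray.json):

# Passes the enclosure check (top level is an object), then fails
# because "subsystems" is an object where an array is required.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev" } }
EOF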
00:07:43.318 [2024-11-15 10:51:30.020612] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:43.318 [2024-11-15 10:51:30.020624] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.577 00:07:43.577 real 0m0.656s 00:07:43.577 user 0m0.403s 00:07:43.577 sys 0m0.148s 00:07:43.577 10:51:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.577 10:51:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:43.577 ************************************ 00:07:43.577 END TEST bdev_json_nonarray 00:07:43.577 ************************************ 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:43.577 10:51:30 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:43.577 00:07:43.577 real 0m42.636s 00:07:43.577 user 1m3.127s 00:07:43.577 sys 0m7.595s 00:07:43.577 10:51:30 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.577 ************************************ 00:07:43.577 END TEST blockdev_nvme 00:07:43.577 ************************************ 00:07:43.577 10:51:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:43.577 10:51:30 -- spdk/autotest.sh@209 -- # uname -s 00:07:43.577 10:51:30 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:43.577 10:51:30 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:43.577 10:51:30 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.577 10:51:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.577 10:51:30 -- common/autotest_common.sh@10 -- # set +x 00:07:43.577 ************************************ 00:07:43.577 START TEST blockdev_nvme_gpt 00:07:43.577 ************************************ 00:07:43.577 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:43.836 * Looking for test storage... 
00:07:43.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:43.836 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.836 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.836 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.836 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.836 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.837 10:51:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.837 --rc genhtml_branch_coverage=1 00:07:43.837 --rc genhtml_function_coverage=1 00:07:43.837 --rc genhtml_legend=1 00:07:43.837 --rc geninfo_all_blocks=1 00:07:43.837 --rc geninfo_unexecuted_blocks=1 00:07:43.837 00:07:43.837 ' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.837 --rc 
genhtml_branch_coverage=1 00:07:43.837 --rc genhtml_function_coverage=1 00:07:43.837 --rc genhtml_legend=1 00:07:43.837 --rc geninfo_all_blocks=1 00:07:43.837 --rc geninfo_unexecuted_blocks=1 00:07:43.837 00:07:43.837 ' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.837 --rc genhtml_branch_coverage=1 00:07:43.837 --rc genhtml_function_coverage=1 00:07:43.837 --rc genhtml_legend=1 00:07:43.837 --rc geninfo_all_blocks=1 00:07:43.837 --rc geninfo_unexecuted_blocks=1 00:07:43.837 00:07:43.837 ' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.837 --rc genhtml_branch_coverage=1 00:07:43.837 --rc genhtml_function_coverage=1 00:07:43.837 --rc genhtml_legend=1 00:07:43.837 --rc geninfo_all_blocks=1 00:07:43.837 --rc geninfo_unexecuted_blocks=1 00:07:43.837 00:07:43.837 ' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61922 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61922 
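The scripts/common.sh trace in the preamble above is the lcov version gate: the installed lcov version (1.15 here) is split on '.', '-' and ':' and compared against 2 component by component, and only versions below 2 keep the old --rc lcov_branch_coverage/lcov_function_coverage flag spellings. A compact re-implementation of that comparison (a sketch, not the verbatim helper):

lt() { # usage: lt 1.15 2 -> exit status 0 when the first version sorts lower
    local IFS='.-:'
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1 # equal is not less-than
}
lt 1.15 2 && echo "old lcov: keep --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"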
00:07:43.837 10:51:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61922 ']' 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.837 10:51:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.096 [2024-11-15 10:51:30.780360] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:44.096 [2024-11-15 10:51:30.780907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61922 ] 00:07:44.356 [2024-11-15 10:51:30.964259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.356 [2024-11-15 10:51:31.083340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.294 10:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.294 10:51:31 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:45.294 10:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:45.294 10:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:45.294 10:51:31 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:45.863 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.863 Waiting for block devices as requested 00:07:46.122 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.122 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.122 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.382 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:51.660 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:51.660 10:51:38 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:51.660 BYT; 00:07:51.660 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:51.660 BYT; 00:07:51.660 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:51.660 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:51.660 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:51.660 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:51.660 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:51.660 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:51.660 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:51.660 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:51.661 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:51.661 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:51.661 10:51:38 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:51.661 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:51.661 10:51:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:52.597 The operation has completed successfully. 00:07:52.597 10:51:39 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:53.977 The operation has completed successfully. 00:07:53.977 10:51:40 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:54.237 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:55.189 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.189 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.189 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.189 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:55.189 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:55.189 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.189 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.189 [] 00:07:55.189 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.189 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:55.189 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:55.189 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:55.189 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:55.453 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:55.453 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.453 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:55.712 10:51:42 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.712 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:55.712 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:55.973 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.973 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:55.973 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:55.974 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "110a12b8-022d-43c1-8ff6-8fe2a8f04f7f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "110a12b8-022d-43c1-8ff6-8fe2a8f04f7f",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "31b48db0-0a9f-446c-8b52-7641532ece8d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "31b48db0-0a9f-446c-8b52-7641532ece8d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "47b9d0d8-8ffd-41b3-b58d-ebf8fd8773ba"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "47b9d0d8-8ffd-41b3-b58d-ebf8fd8773ba",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "3fc4640b-34d0-4fa9-b58c-8e1d7d7bb2fc"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3fc4640b-34d0-4fa9-b58c-8e1d7d7bb2fc",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "379308a5-14b6-4500-8b79-1eb521615203"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "379308a5-14b6-4500-8b79-1eb521615203",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:55.974 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:55.974 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:55.974 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:55.974 10:51:42 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61922 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61922 ']' 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61922 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61922 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:55.974 killing process with pid 61922 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61922' 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61922 00:07:55.974 10:51:42 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61922 00:07:58.507 10:51:45 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:58.507 10:51:45 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.507 10:51:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:58.507 10:51:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.507 10:51:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:58.507 ************************************ 00:07:58.507 START TEST bdev_hello_world 00:07:58.507 ************************************ 00:07:58.507 10:51:45 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:58.507 
[2024-11-15 10:51:45.226623] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:07:58.507 [2024-11-15 10:51:45.226763] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62570 ] 00:07:58.767 [2024-11-15 10:51:45.407683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.767 [2024-11-15 10:51:45.527979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.336 [2024-11-15 10:51:46.183918] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:59.336 [2024-11-15 10:51:46.183969] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:59.336 [2024-11-15 10:51:46.184009] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:59.336 [2024-11-15 10:51:46.186982] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:59.336 [2024-11-15 10:51:46.187758] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:59.336 [2024-11-15 10:51:46.187797] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:59.336 [2024-11-15 10:51:46.188047] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:59.336 00:07:59.336 [2024-11-15 10:51:46.188075] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:00.714 00:08:00.714 real 0m2.155s 00:08:00.714 user 0m1.800s 00:08:00.714 sys 0m0.247s 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:00.714 ************************************ 00:08:00.714 END TEST bdev_hello_world 00:08:00.714 ************************************ 00:08:00.714 10:51:47 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:00.714 10:51:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:00.714 10:51:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.714 10:51:47 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.714 ************************************ 00:08:00.714 START TEST bdev_bounds 00:08:00.714 ************************************ 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62612 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:00.714 Process bdevio pid: 62612 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62612' 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62612 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62612 ']' 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.714 10:51:47 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.714 10:51:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:00.714 [2024-11-15 10:51:47.460515] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:00.714 [2024-11-15 10:51:47.460656] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62612 ] 00:08:00.973 [2024-11-15 10:51:47.644118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.973 [2024-11-15 10:51:47.767767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.973 [2024-11-15 10:51:47.767922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.973 [2024-11-15 10:51:47.767951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:01.911 10:51:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.911 10:51:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:01.911 10:51:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:01.911 I/O targets: 00:08:01.911 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:01.911 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:01.911 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:01.911 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.911 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.911 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:01.911 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:01.911 00:08:01.911 00:08:01.911 CUnit - A unit testing framework for C - Version 2.1-3 00:08:01.911 http://cunit.sourceforge.net/ 00:08:01.911 00:08:01.911 00:08:01.911 Suite: bdevio tests on: Nvme3n1 00:08:01.911 Test: blockdev write read block ...passed 00:08:01.911 Test: blockdev write zeroes read block ...passed 00:08:01.911 Test: blockdev write zeroes read no split ...passed 00:08:01.911 Test: blockdev write zeroes read split ...passed 00:08:01.911 Test: blockdev write zeroes read split partial ...passed 00:08:01.911 Test: blockdev reset ...[2024-11-15 10:51:48.636362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:01.911 [2024-11-15 10:51:48.640204] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
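A bdevio reset test issues a bdev-level reset, which the NVMe bdev module services by disconnecting and reconnecting the controller - that is the nvme_ctrlr_disconnect / bdev_nvme_reset_ctrlr_complete notice pair above, and the same pair repeats in every suite below. The cycle can also be driven by hand over RPC while a target is running (a sketch; controller names as attached in this job, e.g. Nvme3):

# ask the nvme bdev module to reset one attached controller
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme3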
00:08:01.911 passed 00:08:01.911 Test: blockdev write read 8 blocks ...passed 00:08:01.911 Test: blockdev write read size > 128k ...passed 00:08:01.911 Test: blockdev write read invalid size ...passed 00:08:01.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.911 Test: blockdev write read max offset ...passed 00:08:01.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.911 Test: blockdev writev readv 8 blocks ...passed 00:08:01.911 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.911 Test: blockdev writev readv block ...passed 00:08:01.911 Test: blockdev writev readv size > 128k ...passed 00:08:01.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.911 Test: blockdev comparev and writev ...[2024-11-15 10:51:48.648882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af004000 len:0x1000 00:08:01.911 [2024-11-15 10:51:48.648932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.911 passed 00:08:01.911 Test: blockdev nvme passthru rw ...passed 00:08:01.911 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:51:48.649783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.911 [2024-11-15 10:51:48.649819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.911 passed 00:08:01.911 Test: blockdev nvme admin passthru ...passed 00:08:01.911 Test: blockdev copy ...passed 00:08:01.911 Suite: bdevio tests on: Nvme2n3 00:08:01.911 Test: blockdev write read block ...passed 00:08:01.911 Test: blockdev write zeroes read block ...passed 00:08:01.911 Test: blockdev write zeroes read no split ...passed 00:08:01.911 Test: blockdev write zeroes read split ...passed 00:08:01.911 Test: blockdev write zeroes read split partial ...passed 00:08:01.911 Test: blockdev reset ...[2024-11-15 10:51:48.731627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:01.911 [2024-11-15 10:51:48.735867] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
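The error-looking completions inside these suites are expected: the COMPARE FAILURE (02/85) notices are NVMe status SCT 2h / SC 85h (Compare Failure), deliberately provoked by the comparev tests comparing against non-matching data, and the INVALID OPCODE (00/01) notices come from the passthru tests sending opcodes the QEMU controller rejects on purpose - each is followed by passed. When scanning a log like this for real failures it helps to filter those out first (a sketch; the saved log filename is an assumption):

grep 'spdk_nvme_print_completion' autorun.log | grep -vE 'COMPARE FAILURE \(02/85\)|INVALID OPCODE \(00/01\)'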
00:08:01.911 passed 00:08:01.911 Test: blockdev write read 8 blocks ...passed 00:08:01.911 Test: blockdev write read size > 128k ...passed 00:08:01.911 Test: blockdev write read invalid size ...passed 00:08:01.911 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:01.911 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:01.911 Test: blockdev write read max offset ...passed 00:08:01.911 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:01.911 Test: blockdev writev readv 8 blocks ...passed 00:08:01.911 Test: blockdev writev readv 30 x 1block ...passed 00:08:01.911 Test: blockdev writev readv block ...passed 00:08:01.911 Test: blockdev writev readv size > 128k ...passed 00:08:01.911 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:01.911 Test: blockdev comparev and writev ...[2024-11-15 10:51:48.744377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2af002000 len:0x1000 00:08:01.911 [2024-11-15 10:51:48.744428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:01.911 passed 00:08:01.911 Test: blockdev nvme passthru rw ...passed 00:08:01.911 Test: blockdev nvme passthru vendor specific ...passed 00:08:01.911 Test: blockdev nvme admin passthru ...[2024-11-15 10:51:48.745169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:01.911 [2024-11-15 10:51:48.745203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:01.911 passed 00:08:01.911 Test: blockdev copy ...passed 00:08:01.911 Suite: bdevio tests on: Nvme2n2 00:08:01.911 Test: blockdev write read block ...passed 00:08:01.911 Test: blockdev write zeroes read block ...passed 00:08:01.911 Test: blockdev write zeroes read no split ...passed 00:08:02.171 Test: blockdev write zeroes read split ...passed 00:08:02.171 Test: blockdev write zeroes read split partial ...passed 00:08:02.171 Test: blockdev reset ...[2024-11-15 10:51:48.826798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:02.171 [2024-11-15 10:51:48.831007] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
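One thing to keep in mind across the three Nvme2 suites: per the bdev dump earlier, Nvme2n1, Nvme2n2 and Nvme2n3 are namespaces 1-3 of a single controller at 0000:00:12.0 (serial 12342), so each suite's reset test cycles the same controller. A sketch for listing that namespace-to-controller grouping from a live target, using the same JSON fields as the dump:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.driver_specific.nvme?) | "\(.driver_specific.nvme[0].pci_address) ns=\(.driver_specific.nvme[0].ns_data.id) \(.name)"'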
00:08:02.171 passed 00:08:02.171 Test: blockdev write read 8 blocks ...passed 00:08:02.171 Test: blockdev write read size > 128k ...passed 00:08:02.171 Test: blockdev write read invalid size ...passed 00:08:02.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.171 Test: blockdev write read max offset ...passed 00:08:02.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.171 Test: blockdev writev readv 8 blocks ...passed 00:08:02.171 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.171 Test: blockdev writev readv block ...passed 00:08:02.171 Test: blockdev writev readv size > 128k ...passed 00:08:02.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.171 Test: blockdev comparev and writev ...[2024-11-15 10:51:48.839139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e38000 len:0x1000 00:08:02.171 [2024-11-15 10:51:48.839295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.171 passed 00:08:02.171 Test: blockdev nvme passthru rw ...passed 00:08:02.171 Test: blockdev nvme passthru vendor specific ...passed 00:08:02.171 Test: blockdev nvme admin passthru ...[2024-11-15 10:51:48.840092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:02.171 [2024-11-15 10:51:48.840129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:02.171 passed 00:08:02.171 Test: blockdev copy ...passed 00:08:02.171 Suite: bdevio tests on: Nvme2n1 00:08:02.171 Test: blockdev write read block ...passed 00:08:02.171 Test: blockdev write zeroes read block ...passed 00:08:02.171 Test: blockdev write zeroes read no split ...passed 00:08:02.171 Test: blockdev write zeroes read split ...passed 00:08:02.171 Test: blockdev write zeroes read split partial ...passed 00:08:02.171 Test: blockdev reset ...[2024-11-15 10:51:48.933809] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:02.171 [2024-11-15 10:51:48.938115] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:02.171 passed 00:08:02.171 Test: blockdev write read 8 blocks ...
00:08:02.171 passed 00:08:02.171 Test: blockdev write read size > 128k ...passed 00:08:02.171 Test: blockdev write read invalid size ...passed 00:08:02.171 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.171 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.171 Test: blockdev write read max offset ...passed 00:08:02.171 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.171 Test: blockdev writev readv 8 blocks ...passed 00:08:02.171 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.171 Test: blockdev writev readv block ...passed 00:08:02.171 Test: blockdev writev readv size > 128k ...passed 00:08:02.171 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.171 Test: blockdev comparev and writev ...[2024-11-15 10:51:48.948072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1e34000 len:0x1000 00:08:02.171 [2024-11-15 10:51:48.948256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.171 passed 00:08:02.172 Test: blockdev nvme passthru rw ...passed 00:08:02.172 Test: blockdev nvme passthru vendor specific ...[2024-11-15 10:51:48.949264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:02.172 [2024-11-15 10:51:48.949436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:02.172 passed 00:08:02.172 Test: blockdev nvme admin passthru ...passed 00:08:02.172 Test: blockdev copy ...passed 00:08:02.172 Suite: bdevio tests on: Nvme1n1p2 00:08:02.172 Test: blockdev write read block ...passed 00:08:02.172 Test: blockdev write zeroes read block ...passed 00:08:02.172 Test: blockdev write zeroes read no split ...passed 00:08:02.172 Test: blockdev write zeroes read split ...passed 00:08:02.431 Test: blockdev write zeroes read split partial ...passed 00:08:02.431 Test: blockdev reset ...[2024-11-15 10:51:49.029389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:02.431 [2024-11-15 10:51:49.033195] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:02.431 passed 00:08:02.431 Test: blockdev write read 8 blocks ...passed 00:08:02.431 Test: blockdev write read size > 128k ...passed 00:08:02.431 Test: blockdev write read invalid size ...passed 00:08:02.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.431 Test: blockdev write read max offset ...passed 00:08:02.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.431 Test: blockdev writev readv 8 blocks ...passed 00:08:02.431 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.431 Test: blockdev writev readv block ...passed 00:08:02.431 Test: blockdev writev readv size > 128k ...passed 00:08:02.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.431 Test: blockdev comparev and writev ...[2024-11-15 10:51:49.043480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c1e30000 len:0x1000 00:08:02.431 [2024-11-15 10:51:49.043662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.431 passed 00:08:02.431 Test: blockdev nvme passthru rw ...passed 00:08:02.431 Test: blockdev nvme passthru vendor specific ...passed 00:08:02.431 Test: blockdev nvme admin passthru ...passed 00:08:02.431 Test: blockdev copy ...passed 00:08:02.431 Suite: bdevio tests on: Nvme1n1p1 00:08:02.431 Test: blockdev write read block ...passed 00:08:02.431 Test: blockdev write zeroes read block ...passed 00:08:02.431 Test: blockdev write zeroes read no split ...passed 00:08:02.431 Test: blockdev write zeroes read split ...passed 00:08:02.431 Test: blockdev write zeroes read split partial ...passed 00:08:02.431 Test: blockdev reset ...[2024-11-15 10:51:49.113516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:02.431 [2024-11-15 10:51:49.117485] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
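The LBAs in the partition suites line up with the GPT layout dumped earlier: Nvme1n1p2's compare above lands at lba:655360 and Nvme1n1p1's below at lba:256, i.e. each partition bdev translates its block 0 to the base bdev's offset_blocks. A sketch for printing that mapping from a live target:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.driver_specific.gpt?) | "\(.name): base \(.driver_specific.gpt.base_bdev) offset_blocks \(.driver_specific.gpt.offset_blocks)"'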
00:08:02.431 passed 00:08:02.431 Test: blockdev write read 8 blocks ...passed 00:08:02.431 Test: blockdev write read size > 128k ...passed 00:08:02.431 Test: blockdev write read invalid size ...passed 00:08:02.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.431 Test: blockdev write read max offset ...passed 00:08:02.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.431 Test: blockdev writev readv 8 blocks ...passed 00:08:02.431 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.431 Test: blockdev writev readv block ...passed 00:08:02.431 Test: blockdev writev readv size > 128k ...passed 00:08:02.431 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.431 Test: blockdev comparev and writev ...[2024-11-15 10:51:49.127511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2afa0e000 len:0x1000 00:08:02.431 [2024-11-15 10:51:49.127566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:02.432 passed 00:08:02.432 Test: blockdev nvme passthru rw ...passed 00:08:02.432 Test: blockdev nvme passthru vendor specific ...passed 00:08:02.432 Test: blockdev nvme admin passthru ...passed 00:08:02.432 Test: blockdev copy ...passed 00:08:02.432 Suite: bdevio tests on: Nvme0n1 00:08:02.432 Test: blockdev write read block ...passed 00:08:02.432 Test: blockdev write zeroes read block ...passed 00:08:02.432 Test: blockdev write zeroes read no split ...passed 00:08:02.432 Test: blockdev write zeroes read split ...passed 00:08:02.432 Test: blockdev write zeroes read split partial ...passed 00:08:02.432 Test: blockdev reset ...[2024-11-15 10:51:49.196686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:02.432 [2024-11-15 10:51:49.200393] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:02.432 passed 00:08:02.432 Test: blockdev write read 8 blocks ...passed 00:08:02.432 Test: blockdev write read size > 128k ...passed 00:08:02.432 Test: blockdev write read invalid size ...passed 00:08:02.432 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:02.432 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:02.432 Test: blockdev write read max offset ...passed 00:08:02.432 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:02.432 Test: blockdev writev readv 8 blocks ...passed 00:08:02.432 Test: blockdev writev readv 30 x 1block ...passed 00:08:02.432 Test: blockdev writev readv block ...passed 00:08:02.432 Test: blockdev writev readv size > 128k ...passed 00:08:02.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:02.432 Test: blockdev comparev and writev ...passed 00:08:02.432 Test: blockdev nvme passthru rw ...[2024-11-15 10:51:49.208900] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:02.432 separate metadata which is not supported yet.
00:08:02.432 passed 00:08:02.432 Test: blockdev nvme passthru vendor specific ...passed 00:08:02.432 Test: blockdev nvme admin passthru ...[2024-11-15 10:51:49.209521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:02.432 [2024-11-15 10:51:49.209577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:02.432 passed 00:08:02.432 Test: blockdev copy ...passed 00:08:02.432 00:08:02.432 Run Summary: Type Total Ran Passed Failed Inactive 00:08:02.432 suites 7 7 n/a 0 0 00:08:02.432 tests 161 161 161 0 0 00:08:02.432 asserts 1025 1025 1025 0 n/a 00:08:02.432 00:08:02.432 Elapsed time = 1.761 seconds 00:08:02.432 0 00:08:02.432 10:51:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62612 00:08:02.432 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62612 ']' 00:08:02.432 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62612 00:08:02.432 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:02.432 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.432 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62612 00:08:02.691 killing process with pid 62612 00:08:02.691 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.691 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.691 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62612' 00:08:02.691 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62612 00:08:02.691 10:51:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62612 00:08:03.629 ************************************ 00:08:03.629 END TEST bdev_bounds 00:08:03.629 ************************************ 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:03.629 00:08:03.629 real 0m2.968s 00:08:03.629 user 0m7.609s 00:08:03.629 sys 0m0.409s 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 10:51:50 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:03.629 10:51:50 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.629 10:51:50 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.629 10:51:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:03.629 ************************************ 00:08:03.629 START TEST bdev_nbd 00:08:03.629 ************************************ 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.629 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62677 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62677 /var/tmp/spdk-nbd.sock 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62677 ']' 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:03.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.630 10:51:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:03.889 [2024-11-15 10:51:50.510628] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
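
[editor's annotation] The trace above is the nbd harness starting a bare bdev_svc app on its own private RPC socket; everything that follows drives that app over rpc.py. A condensed, hedged sketch of the start/export/teardown flow, using only commands visible in this log (the backgrounding and readiness handling are simplified assumptions; the real harness uses waitforlisten):

    # Start a minimal SPDK app that owns the NVMe bdevs and listens on a
    # dedicated RPC socket, so the test's rpc.py calls collide with nothing else.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    # (simplified: the harness blocks on waitforlisten until the socket answers)

    # Export a bdev as a kernel block device, exercise it, then unmap it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
    kill $nbd_pid
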
00:08:03.889 [2024-11-15 10:51:50.510770] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.889 [2024-11-15 10:51:50.693029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.148 [2024-11-15 10:51:50.812199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.717 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:04.976 1+0 records in 00:08:04.976 1+0 records out 00:08:04.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000506128 s, 8.1 MB/s 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:04.976 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:05.235 10:51:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.235 1+0 records in 00:08:05.235 1+0 records out 00:08:05.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630864 s, 6.5 MB/s 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.235 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.236 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.236 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:05.494 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:05.494 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.495 1+0 records in 00:08:05.495 1+0 records out 00:08:05.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683548 s, 6.0 MB/s 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.495 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:05.753 1+0 records in 00:08:05.753 1+0 records out 00:08:05.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686982 s, 6.0 MB/s 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:05.753 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.012 1+0 records in 00:08:06.012 1+0 records out 00:08:06.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759169 s, 5.4 MB/s 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:06.012 10:51:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.271 1+0 records in 00:08:06.271 1+0 records out 00:08:06.271 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000848521 s, 4.8 MB/s 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:06.271 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:06.530 10:51:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:06.530 1+0 records in 00:08:06.531 1+0 records out 00:08:06.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000683545 s, 6.0 MB/s 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:06.531 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.790 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd0", 00:08:06.790 "bdev_name": "Nvme0n1" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd1", 00:08:06.790 "bdev_name": "Nvme1n1p1" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd2", 00:08:06.790 "bdev_name": "Nvme1n1p2" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd3", 00:08:06.790 "bdev_name": "Nvme2n1" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd4", 00:08:06.790 "bdev_name": "Nvme2n2" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd5", 00:08:06.790 "bdev_name": "Nvme2n3" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd6", 00:08:06.790 "bdev_name": "Nvme3n1" 00:08:06.790 } 00:08:06.790 ]' 00:08:06.790 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:06.790 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd0", 00:08:06.790 "bdev_name": "Nvme0n1" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd1", 00:08:06.790 "bdev_name": "Nvme1n1p1" 00:08:06.790 }, 00:08:06.790 { 00:08:06.790 "nbd_device": "/dev/nbd2", 00:08:06.791 "bdev_name": "Nvme1n1p2" 00:08:06.791 }, 00:08:06.791 { 00:08:06.791 "nbd_device": "/dev/nbd3", 00:08:06.791 "bdev_name": "Nvme2n1" 00:08:06.791 }, 00:08:06.791 { 00:08:06.791 "nbd_device": "/dev/nbd4", 00:08:06.791 "bdev_name": "Nvme2n2" 00:08:06.791 }, 00:08:06.791 { 00:08:06.791 "nbd_device": "/dev/nbd5", 00:08:06.791 "bdev_name": "Nvme2n3" 00:08:06.791 }, 00:08:06.791 { 00:08:06.791 "nbd_device": "/dev/nbd6", 00:08:06.791 "bdev_name": "Nvme3n1" 00:08:06.791 } 00:08:06.791 ]' 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.791 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.064 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.345 10:51:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.609 10:51:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.609 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.869 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:08.127 10:51:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.386 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:08.645 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:08.646 
10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.646 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:08.905 /dev/nbd0 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.905 1+0 records in 00:08:08.905 1+0 records out 00:08:08.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000456634 s, 9.0 MB/s 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:08.905 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:09.164 /dev/nbd1 00:08:09.164 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:09.164 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:09.164 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:09.164 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.164 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.164 10:51:55 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.164 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.165 1+0 records in 00:08:09.165 1+0 records out 00:08:09.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000638266 s, 6.4 MB/s 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.165 10:51:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:09.424 /dev/nbd10 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.424 1+0 records in 00:08:09.424 1+0 records out 00:08:09.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000555027 s, 7.4 MB/s 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.424 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:09.683 /dev/nbd11 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.683 1+0 records in 00:08:09.683 1+0 records out 00:08:09.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080309 s, 5.1 MB/s 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.683 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:09.943 /dev/nbd12 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
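
[editor's annotation] Every nbd_start_disk above is followed by the same readiness check: a bounded poll of /proc/partitions, then a single 4 KiB direct-I/O read through the new device. A minimal bash sketch of that pattern, with illustrative names rather than the exact autotest_common.sh helpers (the sleep pacing is an assumption; the trace only shows the bounded loop):

    waitfornbd_sketch() {
        local nbd_name=$1 i
        # Poll up to 20 times for the device node to show up in /proc/partitions.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed pacing between polls
        done
        # One 4 KiB direct-I/O read proves the block device actually services I/O.
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size -ne 0 ]]   # non-empty read-back means the device is usable
    }
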
00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:09.943 1+0 records in 00:08:09.943 1+0 records out 00:08:09.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511227 s, 8.0 MB/s 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:09.943 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:10.202 /dev/nbd13 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.202 1+0 records in 00:08:10.202 1+0 records out 00:08:10.202 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847289 s, 4.8 MB/s 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.202 10:51:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:10.461 /dev/nbd14 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:10.461 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.462 1+0 records in 00:08:10.462 1+0 records out 00:08:10.462 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665127 s, 6.2 MB/s 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.462 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd0", 00:08:10.722 "bdev_name": "Nvme0n1" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd1", 00:08:10.722 "bdev_name": "Nvme1n1p1" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd10", 00:08:10.722 "bdev_name": "Nvme1n1p2" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd11", 00:08:10.722 "bdev_name": "Nvme2n1" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd12", 00:08:10.722 "bdev_name": "Nvme2n2" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd13", 00:08:10.722 "bdev_name": "Nvme2n3" 
00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd14", 00:08:10.722 "bdev_name": "Nvme3n1" 00:08:10.722 } 00:08:10.722 ]' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd0", 00:08:10.722 "bdev_name": "Nvme0n1" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd1", 00:08:10.722 "bdev_name": "Nvme1n1p1" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd10", 00:08:10.722 "bdev_name": "Nvme1n1p2" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd11", 00:08:10.722 "bdev_name": "Nvme2n1" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd12", 00:08:10.722 "bdev_name": "Nvme2n2" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd13", 00:08:10.722 "bdev_name": "Nvme2n3" 00:08:10.722 }, 00:08:10.722 { 00:08:10.722 "nbd_device": "/dev/nbd14", 00:08:10.722 "bdev_name": "Nvme3n1" 00:08:10.722 } 00:08:10.722 ]' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:10.722 /dev/nbd1 00:08:10.722 /dev/nbd10 00:08:10.722 /dev/nbd11 00:08:10.722 /dev/nbd12 00:08:10.722 /dev/nbd13 00:08:10.722 /dev/nbd14' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:10.722 /dev/nbd1 00:08:10.722 /dev/nbd10 00:08:10.722 /dev/nbd11 00:08:10.722 /dev/nbd12 00:08:10.722 /dev/nbd13 00:08:10.722 /dev/nbd14' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:10.722 256+0 records in 00:08:10.722 256+0 records out 00:08:10.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011623 s, 90.2 MB/s 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.722 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:10.982 256+0 records in 00:08:10.982 256+0 records out 00:08:10.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.135988 s, 7.7 MB/s 00:08:10.982 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.982 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:10.982 256+0 records in 00:08:10.982 256+0 records out 00:08:10.982 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147465 s, 7.1 MB/s 00:08:10.982 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:10.982 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:11.241 256+0 records in 00:08:11.241 256+0 records out 00:08:11.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146521 s, 7.2 MB/s 00:08:11.241 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.241 10:51:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:11.241 256+0 records in 00:08:11.241 256+0 records out 00:08:11.241 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140461 s, 7.5 MB/s 00:08:11.241 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.241 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:11.500 256+0 records in 00:08:11.500 256+0 records out 00:08:11.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144268 s, 7.3 MB/s 00:08:11.500 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.500 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:11.760 256+0 records in 00:08:11.760 256+0 records out 00:08:11.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143283 s, 7.3 MB/s 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:11.760 256+0 records in 00:08:11.760 256+0 records out 00:08:11.760 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14127 s, 7.4 MB/s 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:11.760 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.020 10:51:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.279 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.538 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.797 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:13.056 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.314 10:51:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.314 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:13.573 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:13.832 malloc_lvol_verify 00:08:13.832 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.091 4dc11156-ce16-4e27-ae5a-014a566681ab 00:08:14.091 10:52:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:14.350 12d32f13-6ef6-4993-99d1-07d7fc849326 00:08:14.350 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:14.608 /dev/nbd0 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:14.608 mke2fs 1.47.0 (5-Feb-2023) 00:08:14.608 Discarding device blocks: 0/4096 done 00:08:14.608 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:14.608 00:08:14.608 Allocating group tables: 0/1 done 00:08:14.608 Writing inode tables: 0/1 done 00:08:14.608 Creating journal (1024 blocks): done 00:08:14.608 Writing superblocks and filesystem accounting information: 0/1 done 00:08:14.608 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:14.608 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62677 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62677 ']' 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62677 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62677 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.869 killing process with pid 62677 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62677' 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62677 00:08:14.869 10:52:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62677 00:08:16.288 10:52:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:16.288 00:08:16.288 real 0m12.353s 00:08:16.288 user 0m15.987s 00:08:16.288 sys 0m5.149s 00:08:16.288 10:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.288 10:52:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:16.288 ************************************ 00:08:16.288 END TEST bdev_nbd 00:08:16.288 ************************************ 00:08:16.288 10:52:02 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:16.288 10:52:02 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:16.288 10:52:02 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:16.288 skipping fio tests on NVMe due to multi-ns failures. 00:08:16.288 10:52:02 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:16.288 10:52:02 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:16.288 10:52:02 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.288 10:52:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:16.288 10:52:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.288 10:52:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.288 ************************************ 00:08:16.288 START TEST bdev_verify 00:08:16.288 ************************************ 00:08:16.288 10:52:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:16.288 [2024-11-15 10:52:02.935099] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:16.288 [2024-11-15 10:52:02.935217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63099 ] 00:08:16.288 [2024-11-15 10:52:03.117534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.548 [2024-11-15 10:52:03.229412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.548 [2024-11-15 10:52:03.229441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.117 Running I/O for 5 seconds... 
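The bdevperf invocation traced above is what produces the job table that follows. Read against SPDK's bdevperf usage text (worth re-checking for the exact revision under test), the flags decompose roughly as below, run from the repo root:

    build/examples/bdevperf \
        --json test/bdev/bdev.json \   # bdev configuration to load at startup
        -q 128 \                       # queue depth per job
        -o 4096 \                      # I/O size in bytes
        -w verify \                    # write, read back and compare
        -t 5 \                         # run time in seconds
        -C \                           # let every core submit I/O to every bdev
        -m 0x3                         # core mask: reactors on cores 0 and 1

With -C and two reactors, each bdev should appear twice in the results, once per core mask (0x1 and 0x2), which matches the paired Nvme*/Core Mask rows below.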
00:08:19.425 20416.00 IOPS, 79.75 MiB/s [2024-11-15T10:52:07.224Z] 19712.00 IOPS, 77.00 MiB/s [2024-11-15T10:52:08.161Z] 19968.00 IOPS, 78.00 MiB/s [2024-11-15T10:52:09.098Z] 20848.00 IOPS, 81.44 MiB/s [2024-11-15T10:52:09.357Z] 21363.20 IOPS, 83.45 MiB/s 00:08:22.496 Latency(us) 00:08:22.496 [2024-11-15T10:52:09.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:22.496 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0xbd0bd 00:08:22.496 Nvme0n1 : 5.10 1507.30 5.89 0.00 0.00 84739.37 19581.84 82117.40 00:08:22.496 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:22.496 Nvme0n1 : 5.07 1500.79 5.86 0.00 0.00 84818.62 13370.40 88855.24 00:08:22.496 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0x4ff80 00:08:22.496 Nvme1n1p1 : 5.10 1506.51 5.88 0.00 0.00 84597.31 18213.22 74958.44 00:08:22.496 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:22.496 Nvme1n1p1 : 5.08 1500.09 5.86 0.00 0.00 84679.45 13107.20 81696.28 00:08:22.496 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0x4ff7f 00:08:22.496 Nvme1n1p2 : 5.10 1505.69 5.88 0.00 0.00 84464.19 18739.61 72010.64 00:08:22.496 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:22.496 Nvme1n1p2 : 5.10 1507.28 5.89 0.00 0.00 84299.24 14528.46 69483.95 00:08:22.496 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0x80000 00:08:22.496 Nvme2n1 : 5.10 1504.60 5.88 0.00 0.00 84378.06 20845.19 73695.10 00:08:22.496 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x80000 length 0x80000 00:08:22.496 Nvme2n1 : 5.10 1506.51 5.88 0.00 0.00 84132.53 15370.69 71168.41 00:08:22.496 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0x80000 00:08:22.496 Nvme2n2 : 5.11 1503.97 5.87 0.00 0.00 84295.93 21266.30 74116.22 00:08:22.496 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x80000 length 0x80000 00:08:22.496 Nvme2n2 : 5.10 1505.64 5.88 0.00 0.00 84002.74 16107.64 73273.99 00:08:22.496 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0x80000 00:08:22.496 Nvme2n3 : 5.11 1503.63 5.87 0.00 0.00 84187.06 20529.35 74116.22 00:08:22.496 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x80000 length 0x80000 00:08:22.496 Nvme2n3 : 5.10 1505.02 5.88 0.00 0.00 83881.25 16949.87 75800.67 00:08:22.496 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x0 length 0x20000 00:08:22.496 Nvme3n1 : 5.11 1503.30 5.87 0.00 0.00 84039.55 17581.55 75379.56 00:08:22.496 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:22.496 Verification LBA range: start 0x20000 length 0x20000 00:08:22.496 
Nvme3n1 : 5.10 1504.57 5.88 0.00 0.00 83782.34 14633.74 75379.56 00:08:22.496 [2024-11-15T10:52:09.357Z] =================================================================================================================== 00:08:22.496 [2024-11-15T10:52:09.357Z] Total : 21064.90 82.28 0.00 0.00 84306.45 13107.20 88855.24 00:08:23.876 00:08:23.876 real 0m7.641s 00:08:23.876 user 0m14.149s 00:08:23.876 sys 0m0.283s 00:08:23.876 10:52:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.876 10:52:10 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:23.876 ************************************ 00:08:23.876 END TEST bdev_verify 00:08:23.876 ************************************ 00:08:23.876 10:52:10 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:23.876 10:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:23.876 10:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.876 10:52:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:23.876 ************************************ 00:08:23.876 START TEST bdev_verify_big_io 00:08:23.876 ************************************ 00:08:23.876 10:52:10 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:23.876 [2024-11-15 10:52:10.637041] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:23.876 [2024-11-15 10:52:10.637155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63203 ] 00:08:24.135 [2024-11-15 10:52:10.820307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:24.135 [2024-11-15 10:52:10.938075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.135 [2024-11-15 10:52:10.938108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.072 Running I/O for 5 seconds... 
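bdev_verify_big_io is the same harness with the I/O size raised from 4096 to 65536 bytes, so throughput rather than command rate dominates. The MiB/s column in the table that follows is just IOPS scaled by the I/O size; a quick sanity check against the big-io Total row below:

    # MiB/s = IOPS * io_size_bytes / 2^20; for the Total row of the 65536-byte run:
    awk 'BEGIN { printf "%.2f MiB/s\n", 2347.20 * 65536 / 1048576 }'   # prints 146.70 MiB/s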
00:08:28.889 1687.00 IOPS, 105.44 MiB/s [2024-11-15T10:52:16.702Z] 2212.00 IOPS, 138.25 MiB/s [2024-11-15T10:52:17.638Z] 2778.33 IOPS, 173.65 MiB/s [2024-11-15T10:52:17.638Z] 3199.25 IOPS, 199.95 MiB/s 00:08:30.777 Latency(us) 00:08:30.777 [2024-11-15T10:52:17.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.778 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0xbd0b 00:08:30.778 Nvme0n1 : 5.62 136.65 8.54 0.00 0.00 908255.44 24003.55 1489062.14 00:08:30.778 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:30.778 Nvme0n1 : 5.61 157.03 9.81 0.00 0.00 781435.62 33057.52 815278.37 00:08:30.778 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0x4ff8 00:08:30.778 Nvme1n1p1 : 5.58 160.65 10.04 0.00 0.00 761554.26 78327.36 697366.21 00:08:30.778 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:30.778 Nvme1n1p1 : 5.53 161.92 10.12 0.00 0.00 753495.98 86749.66 717579.72 00:08:30.778 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0x4ff7 00:08:30.778 Nvme1n1p2 : 5.62 159.71 9.98 0.00 0.00 743256.47 92645.27 710841.88 00:08:30.778 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:30.778 Nvme1n1p2 : 5.61 164.92 10.31 0.00 0.00 725946.76 78327.36 771482.42 00:08:30.778 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0x8000 00:08:30.778 Nvme2n1 : 5.62 163.57 10.22 0.00 0.00 716337.89 43795.95 687259.45 00:08:30.778 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x8000 length 0x8000 00:08:30.778 Nvme2n1 : 5.66 169.75 10.61 0.00 0.00 694078.55 37479.22 781589.18 00:08:30.778 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0x8000 00:08:30.778 Nvme2n2 : 5.66 169.39 10.59 0.00 0.00 679494.08 34320.86 690628.37 00:08:30.778 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x8000 length 0x8000 00:08:30.778 Nvme2n2 : 5.71 174.21 10.89 0.00 0.00 660714.77 17792.10 791695.94 00:08:30.778 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0x8000 00:08:30.778 Nvme2n3 : 5.72 169.65 10.60 0.00 0.00 664335.82 31794.17 1441897.28 00:08:30.778 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x8000 length 0x8000 00:08:30.778 Nvme2n3 : 5.76 178.24 11.14 0.00 0.00 630644.95 36005.32 801802.69 00:08:30.778 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x0 length 0x2000 00:08:30.778 Nvme3n1 : 5.77 186.20 11.64 0.00 0.00 594595.30 6606.24 1468848.63 00:08:30.778 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:30.778 Verification LBA range: start 0x2000 length 0x2000 00:08:30.778 Nvme3n1 : 5.79 195.32 12.21 0.00 0.00 
568545.41 6316.72 751268.91 00:08:30.778 [2024-11-15T10:52:17.639Z] =================================================================================================================== 00:08:30.778 [2024-11-15T10:52:17.639Z] Total : 2347.20 146.70 0.00 0.00 698543.38 6316.72 1489062.14 00:08:33.312 00:08:33.312 real 0m9.024s 00:08:33.312 user 0m16.838s 00:08:33.312 sys 0m0.343s 00:08:33.312 10:52:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.312 ************************************ 00:08:33.312 END TEST bdev_verify_big_io 00:08:33.312 ************************************ 00:08:33.312 10:52:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:33.312 10:52:19 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.312 10:52:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:33.312 10:52:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.312 10:52:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.312 ************************************ 00:08:33.312 START TEST bdev_write_zeroes 00:08:33.312 ************************************ 00:08:33.312 10:52:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:33.312 [2024-11-15 10:52:19.701663] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:33.312 [2024-11-15 10:52:19.701788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63323 ] 00:08:33.312 [2024-11-15 10:52:19.884100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.312 [2024-11-15 10:52:20.003008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.879 Running I/O for 1 seconds... 
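Every test in this phase goes through the same run_test wrapper, which emits the starred START TEST / END TEST banners and the real/user/sys timings seen throughout. A minimal sketch of that behaviour, reconstructed from the output alone (the real helper in test/common/autotest_common.sh also manages xtrace and exit codes; the name below is illustrative):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # accounts for the real/user/sys triplet in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }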
00:08:35.251 69888.00 IOPS, 273.00 MiB/s 00:08:35.251 Latency(us) 00:08:35.251 [2024-11-15T10:52:22.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.251 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme0n1 : 1.02 9981.86 38.99 0.00 0.00 12795.43 10527.87 33689.19 00:08:35.251 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme1n1p1 : 1.02 9970.88 38.95 0.00 0.00 12792.16 10738.43 34741.98 00:08:35.251 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme1n1p2 : 1.02 9960.38 38.91 0.00 0.00 12776.05 10264.67 33899.75 00:08:35.251 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme2n1 : 1.02 9951.34 38.87 0.00 0.00 12749.50 10527.87 32636.40 00:08:35.251 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme2n2 : 1.02 9942.37 38.84 0.00 0.00 12735.70 10527.87 32215.29 00:08:35.251 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme2n3 : 1.02 9933.30 38.80 0.00 0.00 12678.90 10422.59 28214.70 00:08:35.251 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:35.251 Nvme3n1 : 1.03 9983.22 39.00 0.00 0.00 12608.22 6606.24 25477.45 00:08:35.251 [2024-11-15T10:52:22.112Z] =================================================================================================================== 00:08:35.251 [2024-11-15T10:52:22.112Z] Total : 69723.35 272.36 0.00 0.00 12733.60 6606.24 34741.98 00:08:36.187 00:08:36.187 real 0m3.237s 00:08:36.187 user 0m2.850s 00:08:36.187 sys 0m0.273s 00:08:36.187 10:52:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.187 10:52:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:36.187 ************************************ 00:08:36.188 END TEST bdev_write_zeroes 00:08:36.188 ************************************ 00:08:36.188 10:52:22 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.188 10:52:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:36.188 10:52:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.188 10:52:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.188 ************************************ 00:08:36.188 START TEST bdev_json_nonenclosed 00:08:36.188 ************************************ 00:08:36.188 10:52:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.188 [2024-11-15 10:52:23.033491] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:08:36.188 [2024-11-15 10:52:23.033618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63376 ] 00:08:36.446 [2024-11-15 10:52:23.214204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.706 [2024-11-15 10:52:23.323715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.706 [2024-11-15 10:52:23.323816] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:36.706 [2024-11-15 10:52:23.323839] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:36.706 [2024-11-15 10:52:23.323852] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.970 00:08:36.970 real 0m0.635s 00:08:36.970 user 0m0.405s 00:08:36.970 sys 0m0.126s 00:08:36.970 ************************************ 00:08:36.970 END TEST bdev_json_nonenclosed 00:08:36.970 ************************************ 00:08:36.970 10:52:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.970 10:52:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:36.970 10:52:23 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.970 10:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:36.970 10:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.970 10:52:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:36.970 ************************************ 00:08:36.970 START TEST bdev_json_nonarray 00:08:36.970 ************************************ 00:08:36.970 10:52:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:36.970 [2024-11-15 10:52:23.737111] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:36.970 [2024-11-15 10:52:23.737242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63402 ] 00:08:37.229 [2024-11-15 10:52:23.916923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.229 [2024-11-15 10:52:24.039161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.229 [2024-11-15 10:52:24.039264] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:08:37.229 [2024-11-15 10:52:24.039288] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:37.229 [2024-11-15 10:52:24.039299] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.488 00:08:37.488 real 0m0.652s 00:08:37.488 user 0m0.410s 00:08:37.488 sys 0m0.136s 00:08:37.488 10:52:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.488 10:52:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:37.488 ************************************ 00:08:37.488 END TEST bdev_json_nonarray 00:08:37.488 ************************************ 00:08:37.748 10:52:24 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:37.748 10:52:24 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:37.748 10:52:24 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:37.748 10:52:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.748 10:52:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.748 10:52:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:37.748 ************************************ 00:08:37.748 START TEST bdev_gpt_uuid 00:08:37.748 ************************************ 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63427 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63427 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63427 ']' 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.748 10:52:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:37.748 [2024-11-15 10:52:24.486626] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:08:37.748 [2024-11-15 10:52:24.486776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63427 ] 00:08:38.014 [2024-11-15 10:52:24.667969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.014 [2024-11-15 10:52:24.777646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.996 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.996 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:38.996 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:38.996 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.996 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.256 Some configs were skipped because the RPC state that can call them passed over. 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.256 10:52:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:39.256 { 00:08:39.256 "name": "Nvme1n1p1", 00:08:39.256 "aliases": [ 00:08:39.256 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:39.256 ], 00:08:39.256 "product_name": "GPT Disk", 00:08:39.256 "block_size": 4096, 00:08:39.256 "num_blocks": 655104, 00:08:39.256 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:39.256 "assigned_rate_limits": { 00:08:39.256 "rw_ios_per_sec": 0, 00:08:39.256 "rw_mbytes_per_sec": 0, 00:08:39.256 "r_mbytes_per_sec": 0, 00:08:39.256 "w_mbytes_per_sec": 0 00:08:39.256 }, 00:08:39.256 "claimed": false, 00:08:39.256 "zoned": false, 00:08:39.256 "supported_io_types": { 00:08:39.256 "read": true, 00:08:39.256 "write": true, 00:08:39.256 "unmap": true, 00:08:39.256 "flush": true, 00:08:39.256 "reset": true, 00:08:39.256 "nvme_admin": false, 00:08:39.256 "nvme_io": false, 00:08:39.256 "nvme_io_md": false, 00:08:39.256 "write_zeroes": true, 00:08:39.256 "zcopy": false, 00:08:39.256 "get_zone_info": false, 00:08:39.256 "zone_management": false, 00:08:39.256 "zone_append": false, 00:08:39.256 "compare": true, 00:08:39.256 "compare_and_write": false, 00:08:39.256 "abort": true, 00:08:39.256 "seek_hole": false, 00:08:39.256 "seek_data": false, 00:08:39.256 "copy": true, 00:08:39.256 "nvme_iov_md": false 00:08:39.256 }, 00:08:39.256 "driver_specific": { 
00:08:39.256 "gpt": { 00:08:39.256 "base_bdev": "Nvme1n1", 00:08:39.256 "offset_blocks": 256, 00:08:39.256 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:39.256 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:39.256 "partition_name": "SPDK_TEST_first" 00:08:39.256 } 00:08:39.256 } 00:08:39.256 } 00:08:39.256 ]' 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:39.256 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:39.517 { 00:08:39.517 "name": "Nvme1n1p2", 00:08:39.517 "aliases": [ 00:08:39.517 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:39.517 ], 00:08:39.517 "product_name": "GPT Disk", 00:08:39.517 "block_size": 4096, 00:08:39.517 "num_blocks": 655103, 00:08:39.517 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:39.517 "assigned_rate_limits": { 00:08:39.517 "rw_ios_per_sec": 0, 00:08:39.517 "rw_mbytes_per_sec": 0, 00:08:39.517 "r_mbytes_per_sec": 0, 00:08:39.517 "w_mbytes_per_sec": 0 00:08:39.517 }, 00:08:39.517 "claimed": false, 00:08:39.517 "zoned": false, 00:08:39.517 "supported_io_types": { 00:08:39.517 "read": true, 00:08:39.517 "write": true, 00:08:39.517 "unmap": true, 00:08:39.517 "flush": true, 00:08:39.517 "reset": true, 00:08:39.517 "nvme_admin": false, 00:08:39.517 "nvme_io": false, 00:08:39.517 "nvme_io_md": false, 00:08:39.517 "write_zeroes": true, 00:08:39.517 "zcopy": false, 00:08:39.517 "get_zone_info": false, 00:08:39.517 "zone_management": false, 00:08:39.517 "zone_append": false, 00:08:39.517 "compare": true, 00:08:39.517 "compare_and_write": false, 00:08:39.517 "abort": true, 00:08:39.517 "seek_hole": false, 00:08:39.517 "seek_data": false, 00:08:39.517 "copy": true, 00:08:39.517 "nvme_iov_md": false 00:08:39.517 }, 00:08:39.517 "driver_specific": { 00:08:39.517 "gpt": { 00:08:39.517 "base_bdev": "Nvme1n1", 00:08:39.517 "offset_blocks": 655360, 00:08:39.517 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:39.517 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:39.517 "partition_name": "SPDK_TEST_second" 00:08:39.517 } 00:08:39.517 } 00:08:39.517 } 00:08:39.517 ]' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63427 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63427 ']' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63427 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63427 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.517 killing process with pid 63427 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63427' 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63427 00:08:39.517 10:52:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63427 00:08:42.052 00:08:42.052 real 0m4.342s 00:08:42.052 user 0m4.440s 00:08:42.052 sys 0m0.552s 00:08:42.052 10:52:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.052 10:52:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:42.052 ************************************ 00:08:42.052 END TEST bdev_gpt_uuid 00:08:42.052 ************************************ 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:42.052 10:52:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:42.621 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.880 Waiting for block devices as requested 00:08:42.880 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.139 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:43.139 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:43.139 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:48.413 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:48.413 10:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:48.413 10:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:48.671 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:48.671 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:48.671 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:48.671 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:48.671 10:52:35 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:48.671 00:08:48.671 real 1m4.895s 00:08:48.671 user 1m20.732s 00:08:48.671 sys 0m11.898s 00:08:48.671 ************************************ 00:08:48.671 END TEST blockdev_nvme_gpt 00:08:48.671 ************************************ 00:08:48.671 10:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.671 10:52:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:48.671 10:52:35 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:48.671 10:52:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.671 10:52:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.671 10:52:35 -- common/autotest_common.sh@10 -- # set +x 00:08:48.671 ************************************ 00:08:48.671 START TEST nvme 00:08:48.671 ************************************ 00:08:48.671 10:52:35 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:48.671 * Looking for test storage... 00:08:48.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:48.930 10:52:35 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:48.931 10:52:35 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:48.931 10:52:35 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:48.931 10:52:35 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:48.931 10:52:35 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:48.931 10:52:35 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:48.931 10:52:35 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:48.931 10:52:35 nvme -- scripts/common.sh@345 -- # : 1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:48.931 10:52:35 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:48.931 10:52:35 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@353 -- # local d=1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:48.931 10:52:35 nvme -- scripts/common.sh@355 -- # echo 1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:48.931 10:52:35 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@353 -- # local d=2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:48.931 10:52:35 nvme -- scripts/common.sh@355 -- # echo 2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:48.931 10:52:35 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:48.931 10:52:35 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:48.931 10:52:35 nvme -- scripts/common.sh@368 -- # return 0 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:48.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.931 --rc genhtml_branch_coverage=1 00:08:48.931 --rc genhtml_function_coverage=1 00:08:48.931 --rc genhtml_legend=1 00:08:48.931 --rc geninfo_all_blocks=1 00:08:48.931 --rc geninfo_unexecuted_blocks=1 00:08:48.931 00:08:48.931 ' 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:48.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.931 --rc genhtml_branch_coverage=1 00:08:48.931 --rc genhtml_function_coverage=1 00:08:48.931 --rc genhtml_legend=1 00:08:48.931 --rc geninfo_all_blocks=1 00:08:48.931 --rc geninfo_unexecuted_blocks=1 00:08:48.931 00:08:48.931 ' 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:48.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.931 --rc genhtml_branch_coverage=1 00:08:48.931 --rc genhtml_function_coverage=1 00:08:48.931 --rc genhtml_legend=1 00:08:48.931 --rc geninfo_all_blocks=1 00:08:48.931 --rc geninfo_unexecuted_blocks=1 00:08:48.931 00:08:48.931 ' 00:08:48.931 10:52:35 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:48.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:48.931 --rc genhtml_branch_coverage=1 00:08:48.931 --rc genhtml_function_coverage=1 00:08:48.931 --rc genhtml_legend=1 00:08:48.931 --rc geninfo_all_blocks=1 00:08:48.931 --rc geninfo_unexecuted_blocks=1 00:08:48.931 00:08:48.931 ' 00:08:48.931 10:52:35 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:49.508 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:50.446 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.446 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.446 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.446 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:50.446 10:52:37 nvme -- nvme/nvme.sh@79 -- # uname 00:08:50.446 10:52:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:50.446 10:52:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:50.446 10:52:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:50.446 10:52:37 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1075 -- # stubpid=64092 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:50.446 Waiting for stub to ready for secondary processes... 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64092 ]] 00:08:50.446 10:52:37 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:50.706 [2024-11-15 10:52:37.332815] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:08:50.706 [2024-11-15 10:52:37.332935] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:51.645 10:52:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:51.645 10:52:38 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64092 ]] 00:08:51.645 10:52:38 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:51.645 [2024-11-15 10:52:38.335658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:51.645 [2024-11-15 10:52:38.445692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.645 [2024-11-15 10:52:38.445847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.645 [2024-11-15 10:52:38.445881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.645 [2024-11-15 10:52:38.464026] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:51.645 [2024-11-15 10:52:38.464063] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.645 [2024-11-15 10:52:38.479590] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:51.645 [2024-11-15 10:52:38.479710] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:51.645 [2024-11-15 10:52:38.483044] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.645 [2024-11-15 10:52:38.483230] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:51.645 [2024-11-15 10:52:38.483301] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:51.645 [2024-11-15 10:52:38.486307] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.645 [2024-11-15 10:52:38.486477] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:51.645 [2024-11-15 10:52:38.486575] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:51.645 [2024-11-15 10:52:38.489642] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:51.645 [2024-11-15 10:52:38.489842] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:51.645 [2024-11-15 10:52:38.489930] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:51.645 [2024-11-15 10:52:38.489985] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:51.645 [2024-11-15 10:52:38.490039] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:52.583 10:52:39 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:52.583 done. 00:08:52.583 10:52:39 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:52.583 10:52:39 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.583 10:52:39 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:52.583 10:52:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.583 10:52:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.583 ************************************ 00:08:52.583 START TEST nvme_reset 00:08:52.583 ************************************ 00:08:52.583 10:52:39 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:52.841 Initializing NVMe Controllers 00:08:52.841 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:52.841 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:52.841 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:52.841 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:52.842 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:52.842 00:08:52.842 real 0m0.304s 00:08:52.842 user 0m0.113s 00:08:52.842 sys 0m0.148s 00:08:52.842 10:52:39 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.842 10:52:39 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:52.842 ************************************ 00:08:52.842 END TEST nvme_reset 00:08:52.842 ************************************ 00:08:52.842 10:52:39 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:52.842 10:52:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.842 10:52:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.842 10:52:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.842 ************************************ 00:08:52.842 START TEST nvme_identify 00:08:52.842 ************************************ 00:08:52.842 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:52.842 10:52:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:52.842 10:52:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:52.842 10:52:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:52.842 10:52:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:52.842 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:52.842 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:52.842 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:52.842 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:52.842 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:53.100 10:52:39 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:53.100 10:52:39 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:53.100 10:52:39 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:53.362 [2024-11-15 10:52:40.028625] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64126 terminated unexpected 00:08:53.362 ===================================================== 00:08:53.362 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.362 ===================================================== 00:08:53.362 Controller Capabilities/Features 00:08:53.362 ================================ 00:08:53.362 Vendor ID: 1b36 00:08:53.362 Subsystem Vendor ID: 1af4 00:08:53.362 Serial Number: 12340 00:08:53.362 Model Number: QEMU NVMe Ctrl 00:08:53.362 Firmware Version: 8.0.0 00:08:53.362 Recommended Arb Burst: 6 00:08:53.362 IEEE OUI Identifier: 00 54 52 00:08:53.362 Multi-path I/O 00:08:53.362 May have multiple subsystem ports: No 00:08:53.362 May have multiple controllers: No 00:08:53.362 Associated with SR-IOV VF: No 00:08:53.362 Max Data Transfer Size: 524288 00:08:53.362 Max Number of Namespaces: 256 00:08:53.362 Max Number of I/O Queues: 64 00:08:53.362 NVMe Specification Version (VS): 1.4 00:08:53.362 NVMe Specification Version (Identify): 1.4 00:08:53.362 Maximum Queue Entries: 2048 00:08:53.362 Contiguous Queues Required: Yes 00:08:53.362 Arbitration Mechanisms Supported 00:08:53.362 Weighted Round Robin: Not Supported 00:08:53.362 Vendor Specific: Not Supported 00:08:53.362 Reset Timeout: 7500 ms 00:08:53.362 Doorbell Stride: 4 bytes 00:08:53.362 NVM Subsystem Reset: Not Supported 00:08:53.362 Command Sets Supported 00:08:53.362 NVM Command Set: Supported 00:08:53.362 Boot Partition: Not Supported 00:08:53.362 Memory Page Size Minimum: 4096 bytes 00:08:53.362 Memory Page Size Maximum: 65536 bytes 00:08:53.362 Persistent Memory Region: Not Supported 00:08:53.362 Optional Asynchronous Events Supported 00:08:53.362 Namespace Attribute Notices: Supported 00:08:53.362 Firmware Activation Notices: Not Supported 00:08:53.362 ANA Change Notices: Not Supported 00:08:53.362 PLE Aggregate Log Change Notices: Not Supported 00:08:53.362 LBA Status Info Alert Notices: Not Supported 00:08:53.362 EGE Aggregate Log Change Notices: Not Supported 00:08:53.362 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.362 Zone Descriptor Change Notices: Not Supported 00:08:53.362 Discovery Log Change Notices: Not Supported 00:08:53.362 Controller Attributes 00:08:53.362 128-bit Host Identifier: Not Supported 00:08:53.362 Non-Operational Permissive Mode: Not Supported 00:08:53.362 NVM Sets: Not Supported 00:08:53.362 Read Recovery Levels: Not Supported 00:08:53.362 Endurance Groups: Not Supported 00:08:53.362 Predictable Latency Mode: Not Supported 00:08:53.362 Traffic Based Keep Alive: Not Supported 00:08:53.362 Namespace Granularity: Not Supported 00:08:53.362 SQ Associations: Not Supported 00:08:53.362 UUID List: Not Supported 00:08:53.362 Multi-Domain Subsystem: Not Supported 00:08:53.362 Fixed Capacity Management: Not Supported 00:08:53.362 Variable Capacity Management: Not Supported 00:08:53.362 Delete Endurance Group: Not Supported 00:08:53.362 Delete NVM Set: Not Supported 00:08:53.362 Extended LBA Formats Supported: Supported 00:08:53.363 Flexible Data Placement Supported: Not Supported 00:08:53.363 00:08:53.363 Controller Memory Buffer Support 00:08:53.363 ================================ 00:08:53.363 Supported: No
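[Editor's note: the xtrace above shows how nvme_identify collects the controller PCI addresses before invoking spdk_nvme_identify on each. A minimal Bash sketch of that helper, reconstructed from the traced lines; $rootdir pointing at the spdk repo is an assumption, and the exact guard in autotest_common.sh may differ:

get_nvme_bdfs() {
	local bdfs=()
	# scripts/gen_nvme.sh emits a bdev JSON config; jq pulls each controller's PCI address (traddr)
	bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
	# the trace shows this check as (( 4 == 0 )): fail if no controllers were found
	((${#bdfs[@]} != 0)) || return 1
	printf '%s\n' "${bdfs[@]}"
}

Here that yields the four QEMU controllers (0000:00:10.0 through 0000:00:13.0) that the identify dump below walks through.]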
00:08:53.363 00:08:53.363 Persistent Memory Region Support 00:08:53.363 ================================ 00:08:53.363 Supported: No 00:08:53.363 00:08:53.363 Admin Command Set Attributes 00:08:53.363 ============================ 00:08:53.363 Security Send/Receive: Not Supported 00:08:53.363 Format NVM: Supported 00:08:53.363 Firmware Activate/Download: Not Supported 00:08:53.363 Namespace Management: Supported 00:08:53.363 Device Self-Test: Not Supported 00:08:53.363 Directives: Supported 00:08:53.363 NVMe-MI: Not Supported 00:08:53.363 Virtualization Management: Not Supported 00:08:53.363 Doorbell Buffer Config: Supported 00:08:53.363 Get LBA Status Capability: Not Supported 00:08:53.363 Command & Feature Lockdown Capability: Not Supported 00:08:53.363 Abort Command Limit: 4 00:08:53.363 Async Event Request Limit: 4 00:08:53.363 Number of Firmware Slots: N/A 00:08:53.363 Firmware Slot 1 Read-Only: N/A 00:08:53.363 Firmware Activation Without Reset: N/A 00:08:53.363 Multiple Update Detection Support: N/A 00:08:53.363 Firmware Update Granularity: No Information Provided 00:08:53.363 Per-Namespace SMART Log: Yes 00:08:53.363 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.363 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:53.363 Command Effects Log Page: Supported 00:08:53.363 Get Log Page Extended Data: Supported 00:08:53.363 Telemetry Log Pages: Not Supported 00:08:53.363 Persistent Event Log Pages: Not Supported 00:08:53.363 Supported Log Pages Log Page: May Support 00:08:53.363 Commands Supported & Effects Log Page: Not Supported 00:08:53.363 Feature Identifiers & Effects Log Page:May Support 00:08:53.363 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.363 Data Area 4 for Telemetry Log: Not Supported 00:08:53.363 Error Log Page Entries Supported: 1 00:08:53.363 Keep Alive: Not Supported 00:08:53.363 00:08:53.363 NVM Command Set Attributes 00:08:53.363 ========================== 00:08:53.363 Submission Queue Entry Size 00:08:53.363 Max: 64 00:08:53.363 Min: 64 00:08:53.363 Completion Queue Entry Size 00:08:53.363 Max: 16 00:08:53.363 Min: 16 00:08:53.363 Number of Namespaces: 256 00:08:53.363 Compare Command: Supported 00:08:53.363 Write Uncorrectable Command: Not Supported 00:08:53.363 Dataset Management Command: Supported 00:08:53.363 Write Zeroes Command: Supported 00:08:53.363 Set Features Save Field: Supported 00:08:53.363 Reservations: Not Supported 00:08:53.363 Timestamp: Supported 00:08:53.363 Copy: Supported 00:08:53.363 Volatile Write Cache: Present 00:08:53.363 Atomic Write Unit (Normal): 1 00:08:53.363 Atomic Write Unit (PFail): 1 00:08:53.363 Atomic Compare & Write Unit: 1 00:08:53.363 Fused Compare & Write: Not Supported 00:08:53.363 Scatter-Gather List 00:08:53.363 SGL Command Set: Supported 00:08:53.363 SGL Keyed: Not Supported 00:08:53.363 SGL Bit Bucket Descriptor: Not Supported 00:08:53.363 SGL Metadata Pointer: Not Supported 00:08:53.363 Oversized SGL: Not Supported 00:08:53.363 SGL Metadata Address: Not Supported 00:08:53.363 SGL Offset: Not Supported 00:08:53.363 Transport SGL Data Block: Not Supported 00:08:53.363 Replay Protected Memory Block: Not Supported 00:08:53.363 00:08:53.363 Firmware Slot Information 00:08:53.363 ========================= 00:08:53.363 Active slot: 1 00:08:53.363 Slot 1 Firmware Revision: 1.0 00:08:53.363 00:08:53.363 00:08:53.363 Commands Supported and Effects 00:08:53.363 ============================== 00:08:53.363 Admin Commands 00:08:53.363 -------------- 00:08:53.363 Delete I/O Submission Queue (00h): Supported 
00:08:53.363 Create I/O Submission Queue (01h): Supported 00:08:53.363 Get Log Page (02h): Supported 00:08:53.363 Delete I/O Completion Queue (04h): Supported 00:08:53.363 Create I/O Completion Queue (05h): Supported 00:08:53.363 Identify (06h): Supported 00:08:53.363 Abort (08h): Supported 00:08:53.363 Set Features (09h): Supported 00:08:53.363 Get Features (0Ah): Supported 00:08:53.363 Asynchronous Event Request (0Ch): Supported 00:08:53.363 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.363 Directive Send (19h): Supported 00:08:53.363 Directive Receive (1Ah): Supported 00:08:53.363 Virtualization Management (1Ch): Supported 00:08:53.363 Doorbell Buffer Config (7Ch): Supported 00:08:53.363 Format NVM (80h): Supported LBA-Change 00:08:53.363 I/O Commands 00:08:53.363 ------------ 00:08:53.363 Flush (00h): Supported LBA-Change 00:08:53.363 Write (01h): Supported LBA-Change 00:08:53.363 Read (02h): Supported 00:08:53.363 Compare (05h): Supported 00:08:53.363 Write Zeroes (08h): Supported LBA-Change 00:08:53.363 Dataset Management (09h): Supported LBA-Change 00:08:53.363 Unknown (0Ch): Supported 00:08:53.363 Unknown (12h): Supported 00:08:53.363 Copy (19h): Supported LBA-Change 00:08:53.363 Unknown (1Dh): Supported LBA-Change 00:08:53.363 00:08:53.363 Error Log 00:08:53.363 ========= 00:08:53.363 00:08:53.363 Arbitration 00:08:53.363 =========== 00:08:53.363 Arbitration Burst: no limit 00:08:53.363 00:08:53.363 Power Management 00:08:53.363 ================ 00:08:53.363 Number of Power States: 1 00:08:53.363 Current Power State: Power State #0 00:08:53.363 Power State #0: 00:08:53.363 Max Power: 25.00 W 00:08:53.363 Non-Operational State: Operational 00:08:53.363 Entry Latency: 16 microseconds 00:08:53.363 Exit Latency: 4 microseconds 00:08:53.363 Relative Read Throughput: 0 00:08:53.363 Relative Read Latency: 0 00:08:53.363 Relative Write Throughput: 0 00:08:53.363 Relative Write Latency: 0 00:08:53.363 [2024-11-15 10:52:40.030013] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64126 terminated unexpected 00:08:53.363 Idle Power: Not Reported 00:08:53.363 Active Power: Not Reported 00:08:53.363 Non-Operational Permissive Mode: Not Supported 00:08:53.363 00:08:53.363 Health Information 00:08:53.363 ================== 00:08:53.363 Critical Warnings: 00:08:53.363 Available Spare Space: OK 00:08:53.363 Temperature: OK 00:08:53.363 Device Reliability: OK 00:08:53.363 Read Only: No 00:08:53.364 Volatile Memory Backup: OK 00:08:53.364 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.364 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.364 Available Spare: 0% 00:08:53.364 Available Spare Threshold: 0% 00:08:53.364 Life Percentage Used: 0% 00:08:53.364 Data Units Read: 795 00:08:53.364 Data Units Written: 723 00:08:53.364 Host Read Commands: 38867 00:08:53.364 Host Write Commands: 38653 00:08:53.364 Controller Busy Time: 0 minutes 00:08:53.364 Power Cycles: 0 00:08:53.364 Power On Hours: 0 hours 00:08:53.364 Unsafe Shutdowns: 0 00:08:53.364 Unrecoverable Media Errors: 0 00:08:53.364 Lifetime Error Log Entries: 0 00:08:53.364 Warning Temperature Time: 0 minutes 00:08:53.364 Critical Temperature Time: 0 minutes 00:08:53.364 00:08:53.364 Number of Queues 00:08:53.364 ================ 00:08:53.364 Number of I/O Submission Queues: 64 00:08:53.364 Number of I/O Completion Queues: 64 00:08:53.364 00:08:53.364 ZNS Specific Controller Data 00:08:53.364 ============================ 00:08:53.364 Zone Append Size Limit: 0 00:08:53.364
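[Editor's note: in the health sections of these dumps the composite temperature appears twice because NVMe reports it in Kelvin and the identify tool derives the Celsius figure by subtracting 273. A one-line Bash illustration of the arithmetic behind the "323 Kelvin (50 Celsius)" reading above; the variable name is hypothetical, not part of the test:

kelvin=323
printf '%u Kelvin (%u Celsius)\n' "$kelvin" "$((kelvin - 273))"   # prints: 323 Kelvin (50 Celsius)
]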
00:08:53.364 00:08:53.364 Active Namespaces 00:08:53.364 ================= 00:08:53.364 Namespace ID:1 00:08:53.364 Error Recovery Timeout: Unlimited 00:08:53.364 Command Set Identifier: NVM (00h) 00:08:53.364 Deallocate: Supported 00:08:53.364 Deallocated/Unwritten Error: Supported 00:08:53.364 Deallocated Read Value: All 0x00 00:08:53.364 Deallocate in Write Zeroes: Not Supported 00:08:53.364 Deallocated Guard Field: 0xFFFF 00:08:53.364 Flush: Supported 00:08:53.364 Reservation: Not Supported 00:08:53.364 Metadata Transferred as: Separate Metadata Buffer 00:08:53.364 Namespace Sharing Capabilities: Private 00:08:53.364 Size (in LBAs): 1548666 (5GiB) 00:08:53.364 Capacity (in LBAs): 1548666 (5GiB) 00:08:53.364 Utilization (in LBAs): 1548666 (5GiB) 00:08:53.364 Thin Provisioning: Not Supported 00:08:53.364 Per-NS Atomic Units: No 00:08:53.364 Maximum Single Source Range Length: 128 00:08:53.364 Maximum Copy Length: 128 00:08:53.364 Maximum Source Range Count: 128 00:08:53.364 NGUID/EUI64 Never Reused: No 00:08:53.364 Namespace Write Protected: No 00:08:53.364 Number of LBA Formats: 8 00:08:53.364 Current LBA Format: LBA Format #07 00:08:53.364 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.364 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.364 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.364 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.364 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.364 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.364 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.364 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.364 00:08:53.364 NVM Specific Namespace Data 00:08:53.364 =========================== 00:08:53.364 Logical Block Storage Tag Mask: 0 00:08:53.364 Protection Information Capabilities: 00:08:53.364 16b Guard Protection Information Storage Tag Support: No 00:08:53.364 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.364 Storage Tag Check Read Support: No 00:08:53.364 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.364 ===================================================== 00:08:53.364 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.364 ===================================================== 00:08:53.364 Controller Capabilities/Features 00:08:53.364 ================================ 00:08:53.364 Vendor ID: 1b36 00:08:53.364 Subsystem Vendor ID: 1af4 00:08:53.364 Serial Number: 12341 00:08:53.364 Model Number: QEMU NVMe Ctrl 00:08:53.364 Firmware Version: 8.0.0 00:08:53.364 Recommended Arb Burst: 6 00:08:53.364 IEEE OUI Identifier: 00 54 52 00:08:53.364 Multi-path I/O 00:08:53.364 May have multiple subsystem ports: No 00:08:53.364 May have multiple controllers: No 
00:08:53.364 Associated with SR-IOV VF: No 00:08:53.364 Max Data Transfer Size: 524288 00:08:53.364 Max Number of Namespaces: 256 00:08:53.364 Max Number of I/O Queues: 64 00:08:53.364 NVMe Specification Version (VS): 1.4 00:08:53.364 NVMe Specification Version (Identify): 1.4 00:08:53.364 Maximum Queue Entries: 2048 00:08:53.364 Contiguous Queues Required: Yes 00:08:53.364 Arbitration Mechanisms Supported 00:08:53.364 Weighted Round Robin: Not Supported 00:08:53.364 Vendor Specific: Not Supported 00:08:53.364 Reset Timeout: 7500 ms 00:08:53.364 Doorbell Stride: 4 bytes 00:08:53.364 NVM Subsystem Reset: Not Supported 00:08:53.364 Command Sets Supported 00:08:53.364 NVM Command Set: Supported 00:08:53.364 Boot Partition: Not Supported 00:08:53.364 Memory Page Size Minimum: 4096 bytes 00:08:53.364 Memory Page Size Maximum: 65536 bytes 00:08:53.364 Persistent Memory Region: Not Supported 00:08:53.364 Optional Asynchronous Events Supported 00:08:53.364 Namespace Attribute Notices: Supported 00:08:53.364 Firmware Activation Notices: Not Supported 00:08:53.364 ANA Change Notices: Not Supported 00:08:53.364 PLE Aggregate Log Change Notices: Not Supported 00:08:53.364 LBA Status Info Alert Notices: Not Supported 00:08:53.364 EGE Aggregate Log Change Notices: Not Supported 00:08:53.364 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.364 Zone Descriptor Change Notices: Not Supported 00:08:53.364 Discovery Log Change Notices: Not Supported 00:08:53.364 Controller Attributes 00:08:53.364 128-bit Host Identifier: Not Supported 00:08:53.364 Non-Operational Permissive Mode: Not Supported 00:08:53.364 NVM Sets: Not Supported 00:08:53.364 Read Recovery Levels: Not Supported 00:08:53.364 Endurance Groups: Not Supported 00:08:53.364 Predictable Latency Mode: Not Supported 00:08:53.364 Traffic Based Keep Alive: Not Supported 00:08:53.364 Namespace Granularity: Not Supported 00:08:53.364 SQ Associations: Not Supported 00:08:53.364 UUID List: Not Supported 00:08:53.364 Multi-Domain Subsystem: Not Supported 00:08:53.364 Fixed Capacity Management: Not Supported 00:08:53.364 Variable Capacity Management: Not Supported 00:08:53.364 Delete Endurance Group: Not Supported 00:08:53.364 Delete NVM Set: Not Supported 00:08:53.364 Extended LBA Formats Supported: Supported 00:08:53.364 Flexible Data Placement Supported: Not Supported 00:08:53.364 00:08:53.364 Controller Memory Buffer Support 00:08:53.364 ================================ 00:08:53.364 Supported: No 00:08:53.364 00:08:53.364 Persistent Memory Region Support 00:08:53.364 ================================ 00:08:53.364 Supported: No 00:08:53.364 00:08:53.364 Admin Command Set Attributes 00:08:53.364 ============================ 00:08:53.364 Security Send/Receive: Not Supported 00:08:53.364 Format NVM: Supported 00:08:53.364 Firmware Activate/Download: Not Supported 00:08:53.364 Namespace Management: Supported 00:08:53.364 Device Self-Test: Not Supported 00:08:53.364 Directives: Supported 00:08:53.364 NVMe-MI: Not Supported 00:08:53.364 Virtualization Management: Not Supported 00:08:53.364 Doorbell Buffer Config: Supported 00:08:53.364 Get LBA Status Capability: Not Supported 00:08:53.364 Command & Feature Lockdown Capability: Not Supported 00:08:53.364 Abort Command Limit: 4 00:08:53.364 Async Event Request Limit: 4 00:08:53.365 Number of Firmware Slots: N/A 00:08:53.365 Firmware Slot 1 Read-Only: N/A 00:08:53.365 Firmware Activation Without Reset: N/A 00:08:53.365 Multiple Update Detection Support: N/A 00:08:53.365 Firmware Update Granularity: No
Information Provided 00:08:53.365 Per-Namespace SMART Log: Yes 00:08:53.365 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.365 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:53.365 Command Effects Log Page: Supported 00:08:53.365 Get Log Page Extended Data: Supported 00:08:53.365 Telemetry Log Pages: Not Supported 00:08:53.365 Persistent Event Log Pages: Not Supported 00:08:53.365 Supported Log Pages Log Page: May Support 00:08:53.365 Commands Supported & Effects Log Page: Not Supported 00:08:53.365 Feature Identifiers & Effects Log Page:May Support 00:08:53.365 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.365 Data Area 4 for Telemetry Log: Not Supported 00:08:53.365 Error Log Page Entries Supported: 1 00:08:53.365 Keep Alive: Not Supported 00:08:53.365 00:08:53.365 NVM Command Set Attributes 00:08:53.365 ========================== 00:08:53.365 Submission Queue Entry Size 00:08:53.365 Max: 64 00:08:53.365 Min: 64 00:08:53.365 Completion Queue Entry Size 00:08:53.365 Max: 16 00:08:53.365 Min: 16 00:08:53.365 Number of Namespaces: 256 00:08:53.365 Compare Command: Supported 00:08:53.365 Write Uncorrectable Command: Not Supported 00:08:53.365 Dataset Management Command: Supported 00:08:53.365 Write Zeroes Command: Supported 00:08:53.365 Set Features Save Field: Supported 00:08:53.365 Reservations: Not Supported 00:08:53.365 Timestamp: Supported 00:08:53.365 Copy: Supported 00:08:53.365 Volatile Write Cache: Present 00:08:53.365 Atomic Write Unit (Normal): 1 00:08:53.365 Atomic Write Unit (PFail): 1 00:08:53.365 Atomic Compare & Write Unit: 1 00:08:53.365 Fused Compare & Write: Not Supported 00:08:53.365 Scatter-Gather List 00:08:53.365 SGL Command Set: Supported 00:08:53.365 SGL Keyed: Not Supported 00:08:53.365 SGL Bit Bucket Descriptor: Not Supported 00:08:53.365 SGL Metadata Pointer: Not Supported 00:08:53.365 Oversized SGL: Not Supported 00:08:53.365 SGL Metadata Address: Not Supported 00:08:53.365 SGL Offset: Not Supported 00:08:53.365 Transport SGL Data Block: Not Supported 00:08:53.365 Replay Protected Memory Block: Not Supported 00:08:53.365 00:08:53.365 Firmware Slot Information 00:08:53.365 ========================= 00:08:53.365 Active slot: 1 00:08:53.365 Slot 1 Firmware Revision: 1.0 00:08:53.365 00:08:53.365 00:08:53.365 Commands Supported and Effects 00:08:53.365 ============================== 00:08:53.365 Admin Commands 00:08:53.365 -------------- 00:08:53.365 Delete I/O Submission Queue (00h): Supported 00:08:53.365 Create I/O Submission Queue (01h): Supported 00:08:53.365 Get Log Page (02h): Supported 00:08:53.365 Delete I/O Completion Queue (04h): Supported 00:08:53.365 Create I/O Completion Queue (05h): Supported 00:08:53.365 Identify (06h): Supported 00:08:53.365 Abort (08h): Supported 00:08:53.365 Set Features (09h): Supported 00:08:53.365 Get Features (0Ah): Supported 00:08:53.365 Asynchronous Event Request (0Ch): Supported 00:08:53.365 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.365 Directive Send (19h): Supported 00:08:53.365 Directive Receive (1Ah): Supported 00:08:53.365 Virtualization Management (1Ch): Supported 00:08:53.365 Doorbell Buffer Config (7Ch): Supported 00:08:53.365 Format NVM (80h): Supported LBA-Change 00:08:53.365 I/O Commands 00:08:53.365 ------------ 00:08:53.365 Flush (00h): Supported LBA-Change 00:08:53.365 Write (01h): Supported LBA-Change 00:08:53.365 Read (02h): Supported 00:08:53.365 Compare (05h): Supported 00:08:53.365 Write Zeroes (08h): Supported LBA-Change 00:08:53.365 Dataset Management 
(09h): Supported LBA-Change 00:08:53.365 Unknown (0Ch): Supported 00:08:53.365 Unknown (12h): Supported 00:08:53.365 Copy (19h): Supported LBA-Change 00:08:53.365 Unknown (1Dh): Supported LBA-Change 00:08:53.365 00:08:53.365 Error Log 00:08:53.365 ========= 00:08:53.365 00:08:53.365 Arbitration 00:08:53.365 =========== 00:08:53.365 Arbitration Burst: no limit 00:08:53.365 00:08:53.365 Power Management 00:08:53.365 ================ 00:08:53.365 Number of Power States: 1 00:08:53.365 Current Power State: Power State #0 00:08:53.365 Power State #0: 00:08:53.365 Max Power: 25.00 W 00:08:53.365 Non-Operational State: Operational 00:08:53.365 Entry Latency: 16 microseconds 00:08:53.365 Exit Latency: 4 microseconds 00:08:53.365 Relative Read Throughput: 0 00:08:53.365 Relative Read Latency: 0 00:08:53.365 Relative Write Throughput: 0 00:08:53.365 Relative Write Latency: 0 00:08:53.365 Idle Power: Not Reported 00:08:53.365 Active Power: Not Reported 00:08:53.365 Non-Operational Permissive Mode: Not Supported 00:08:53.365 00:08:53.365 Health Information 00:08:53.365 ================== 00:08:53.365 Critical Warnings: 00:08:53.365 Available Spare Space: OK 00:08:53.365 [2024-11-15 10:52:40.031325] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64126 terminated unexpected 00:08:53.365 Temperature: OK 00:08:53.365 Device Reliability: OK 00:08:53.365 Read Only: No 00:08:53.365 Volatile Memory Backup: OK 00:08:53.365 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.365 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.365 Available Spare: 0% 00:08:53.365 Available Spare Threshold: 0% 00:08:53.365 Life Percentage Used: 0% 00:08:53.365 Data Units Read: 1241 00:08:53.365 Data Units Written: 1102 00:08:53.365 Host Read Commands: 57438 00:08:53.365 Host Write Commands: 56137 00:08:53.365 Controller Busy Time: 0 minutes 00:08:53.365 Power Cycles: 0 00:08:53.365 Power On Hours: 0 hours 00:08:53.365 Unsafe Shutdowns: 0 00:08:53.365 Unrecoverable Media Errors: 0 00:08:53.365 Lifetime Error Log Entries: 0 00:08:53.365 Warning Temperature Time: 0 minutes 00:08:53.365 Critical Temperature Time: 0 minutes 00:08:53.365 00:08:53.365 Number of Queues 00:08:53.365 ================ 00:08:53.365 Number of I/O Submission Queues: 64 00:08:53.365 Number of I/O Completion Queues: 64 00:08:53.365 00:08:53.365 ZNS Specific Controller Data 00:08:53.365 ============================ 00:08:53.365 Zone Append Size Limit: 0 00:08:53.365 00:08:53.365 00:08:53.365 Active Namespaces 00:08:53.365 ================= 00:08:53.365 Namespace ID:1 00:08:53.365 Error Recovery Timeout: Unlimited 00:08:53.365 Command Set Identifier: NVM (00h) 00:08:53.365 Deallocate: Supported 00:08:53.365 Deallocated/Unwritten Error: Supported 00:08:53.365 Deallocated Read Value: All 0x00 00:08:53.365 Deallocate in Write Zeroes: Not Supported 00:08:53.365 Deallocated Guard Field: 0xFFFF 00:08:53.365 Flush: Supported 00:08:53.365 Reservation: Not Supported 00:08:53.365 Namespace Sharing Capabilities: Private 00:08:53.365 Size (in LBAs): 1310720 (5GiB) 00:08:53.365 Capacity (in LBAs): 1310720 (5GiB) 00:08:53.366 Utilization (in LBAs): 1310720 (5GiB) 00:08:53.366 Thin Provisioning: Not Supported 00:08:53.366 Per-NS Atomic Units: No 00:08:53.366 Maximum Single Source Range Length: 128 00:08:53.366 Maximum Copy Length: 128 00:08:53.366 Maximum Source Range Count: 128 00:08:53.366 NGUID/EUI64 Never Reused: No 00:08:53.366 Namespace Write Protected: No 00:08:53.366 Number of LBA Formats: 8 00:08:53.366 Current LBA
Format: LBA Format #04 00:08:53.366 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.366 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.366 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.366 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.366 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.366 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.366 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.366 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.366 00:08:53.366 NVM Specific Namespace Data 00:08:53.366 =========================== 00:08:53.366 Logical Block Storage Tag Mask: 0 00:08:53.366 Protection Information Capabilities: 00:08:53.366 16b Guard Protection Information Storage Tag Support: No 00:08:53.366 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.366 Storage Tag Check Read Support: No 00:08:53.366 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.366 ===================================================== 00:08:53.366 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:53.366 ===================================================== 00:08:53.366 Controller Capabilities/Features 00:08:53.366 ================================ 00:08:53.366 Vendor ID: 1b36 00:08:53.366 Subsystem Vendor ID: 1af4 00:08:53.366 Serial Number: 12343 00:08:53.366 Model Number: QEMU NVMe Ctrl 00:08:53.366 Firmware Version: 8.0.0 00:08:53.366 Recommended Arb Burst: 6 00:08:53.366 IEEE OUI Identifier: 00 54 52 00:08:53.366 Multi-path I/O 00:08:53.366 May have multiple subsystem ports: No 00:08:53.366 May have multiple controllers: Yes 00:08:53.366 Associated with SR-IOV VF: No 00:08:53.366 Max Data Transfer Size: 524288 00:08:53.366 Max Number of Namespaces: 256 00:08:53.366 Max Number of I/O Queues: 64 00:08:53.366 NVMe Specification Version (VS): 1.4 00:08:53.366 NVMe Specification Version (Identify): 1.4 00:08:53.366 Maximum Queue Entries: 2048 00:08:53.366 Contiguous Queues Required: Yes 00:08:53.366 Arbitration Mechanisms Supported 00:08:53.366 Weighted Round Robin: Not Supported 00:08:53.366 Vendor Specific: Not Supported 00:08:53.366 Reset Timeout: 7500 ms 00:08:53.366 Doorbell Stride: 4 bytes 00:08:53.366 NVM Subsystem Reset: Not Supported 00:08:53.366 Command Sets Supported 00:08:53.366 NVM Command Set: Supported 00:08:53.366 Boot Partition: Not Supported 00:08:53.366 Memory Page Size Minimum: 4096 bytes 00:08:53.366 Memory Page Size Maximum: 65536 bytes 00:08:53.366 Persistent Memory Region: Not Supported 00:08:53.366 Optional Asynchronous Events Supported 00:08:53.366 Namespace Attribute Notices: Supported 00:08:53.366 Firmware Activation Notices: Not Supported 00:08:53.366 ANA Change Notices: Not Supported 00:08:53.366 PLE Aggregate 
Log Change Notices: Not Supported 00:08:53.366 LBA Status Info Alert Notices: Not Supported 00:08:53.366 EGE Aggregate Log Change Notices: Not Supported 00:08:53.366 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.366 Zone Descriptor Change Notices: Not Supported 00:08:53.366 Discovery Log Change Notices: Not Supported 00:08:53.366 Controller Attributes 00:08:53.366 128-bit Host Identifier: Not Supported 00:08:53.366 Non-Operational Permissive Mode: Not Supported 00:08:53.366 NVM Sets: Not Supported 00:08:53.366 Read Recovery Levels: Not Supported 00:08:53.366 Endurance Groups: Supported 00:08:53.366 Predictable Latency Mode: Not Supported 00:08:53.366 Traffic Based Keep Alive: Not Supported 00:08:53.366 Namespace Granularity: Not Supported 00:08:53.366 SQ Associations: Not Supported 00:08:53.366 UUID List: Not Supported 00:08:53.366 Multi-Domain Subsystem: Not Supported 00:08:53.366 Fixed Capacity Management: Not Supported 00:08:53.366 Variable Capacity Management: Not Supported 00:08:53.366 Delete Endurance Group: Not Supported 00:08:53.366 Delete NVM Set: Not Supported 00:08:53.366 Extended LBA Formats Supported: Supported 00:08:53.366 Flexible Data Placement Supported: Supported 00:08:53.366 00:08:53.366 Controller Memory Buffer Support 00:08:53.366 ================================ 00:08:53.366 Supported: No 00:08:53.366 00:08:53.366 Persistent Memory Region Support 00:08:53.366 ================================ 00:08:53.366 Supported: No 00:08:53.366 00:08:53.366 Admin Command Set Attributes 00:08:53.366 ============================ 00:08:53.366 Security Send/Receive: Not Supported 00:08:53.366 Format NVM: Supported 00:08:53.366 Firmware Activate/Download: Not Supported 00:08:53.366 Namespace Management: Supported 00:08:53.366 Device Self-Test: Not Supported 00:08:53.366 Directives: Supported 00:08:53.366 NVMe-MI: Not Supported 00:08:53.366 Virtualization Management: Not Supported 00:08:53.366 Doorbell Buffer Config: Supported 00:08:53.366 Get LBA Status Capability: Not Supported 00:08:53.366 Command & Feature Lockdown Capability: Not Supported 00:08:53.366 Abort Command Limit: 4 00:08:53.366 Async Event Request Limit: 4 00:08:53.366 Number of Firmware Slots: N/A 00:08:53.366 Firmware Slot 1 Read-Only: N/A 00:08:53.366 Firmware Activation Without Reset: N/A 00:08:53.366 Multiple Update Detection Support: N/A 00:08:53.366 Firmware Update Granularity: No Information Provided 00:08:53.366 Per-Namespace SMART Log: Yes 00:08:53.366 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.366 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:53.366 Command Effects Log Page: Supported 00:08:53.366 Get Log Page Extended Data: Supported 00:08:53.366 Telemetry Log Pages: Not Supported 00:08:53.366 Persistent Event Log Pages: Not Supported 00:08:53.366 Supported Log Pages Log Page: May Support 00:08:53.366 Commands Supported & Effects Log Page: Not Supported 00:08:53.366 Feature Identifiers & Effects Log Page:May Support 00:08:53.366 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.366 Data Area 4 for Telemetry Log: Not Supported 00:08:53.366 Error Log Page Entries Supported: 1 00:08:53.366 Keep Alive: Not Supported 00:08:53.366 00:08:53.366 NVM Command Set Attributes 00:08:53.366 ========================== 00:08:53.366 Submission Queue Entry Size 00:08:53.366 Max: 64 00:08:53.367 Min: 64 00:08:53.367 Completion Queue Entry Size 00:08:53.367 Max: 16 00:08:53.367 Min: 16 00:08:53.367 Number of Namespaces: 256 00:08:53.367 Compare Command: Supported 00:08:53.367 Write
Uncorrectable Command: Not Supported 00:08:53.367 Dataset Management Command: Supported 00:08:53.367 Write Zeroes Command: Supported 00:08:53.367 Set Features Save Field: Supported 00:08:53.367 Reservations: Not Supported 00:08:53.367 Timestamp: Supported 00:08:53.367 Copy: Supported 00:08:53.367 Volatile Write Cache: Present 00:08:53.367 Atomic Write Unit (Normal): 1 00:08:53.367 Atomic Write Unit (PFail): 1 00:08:53.367 Atomic Compare & Write Unit: 1 00:08:53.367 Fused Compare & Write: Not Supported 00:08:53.367 Scatter-Gather List 00:08:53.367 SGL Command Set: Supported 00:08:53.367 SGL Keyed: Not Supported 00:08:53.367 SGL Bit Bucket Descriptor: Not Supported 00:08:53.367 SGL Metadata Pointer: Not Supported 00:08:53.367 Oversized SGL: Not Supported 00:08:53.367 SGL Metadata Address: Not Supported 00:08:53.367 SGL Offset: Not Supported 00:08:53.367 Transport SGL Data Block: Not Supported 00:08:53.367 Replay Protected Memory Block: Not Supported 00:08:53.367 00:08:53.367 Firmware Slot Information 00:08:53.367 ========================= 00:08:53.367 Active slot: 1 00:08:53.367 Slot 1 Firmware Revision: 1.0 00:08:53.367 00:08:53.367 00:08:53.367 Commands Supported and Effects 00:08:53.367 ============================== 00:08:53.367 Admin Commands 00:08:53.367 -------------- 00:08:53.367 Delete I/O Submission Queue (00h): Supported 00:08:53.367 Create I/O Submission Queue (01h): Supported 00:08:53.367 Get Log Page (02h): Supported 00:08:53.367 Delete I/O Completion Queue (04h): Supported 00:08:53.367 Create I/O Completion Queue (05h): Supported 00:08:53.367 Identify (06h): Supported 00:08:53.367 Abort (08h): Supported 00:08:53.367 Set Features (09h): Supported 00:08:53.367 Get Features (0Ah): Supported 00:08:53.367 Asynchronous Event Request (0Ch): Supported 00:08:53.367 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.367 Directive Send (19h): Supported 00:08:53.367 Directive Receive (1Ah): Supported 00:08:53.367 Virtualization Management (1Ch): Supported 00:08:53.367 Doorbell Buffer Config (7Ch): Supported 00:08:53.367 Format NVM (80h): Supported LBA-Change 00:08:53.367 I/O Commands 00:08:53.367 ------------ 00:08:53.367 Flush (00h): Supported LBA-Change 00:08:53.367 Write (01h): Supported LBA-Change 00:08:53.367 Read (02h): Supported 00:08:53.367 Compare (05h): Supported 00:08:53.367 Write Zeroes (08h): Supported LBA-Change 00:08:53.367 Dataset Management (09h): Supported LBA-Change 00:08:53.367 Unknown (0Ch): Supported 00:08:53.367 Unknown (12h): Supported 00:08:53.367 Copy (19h): Supported LBA-Change 00:08:53.367 Unknown (1Dh): Supported LBA-Change 00:08:53.367 00:08:53.367 Error Log 00:08:53.367 ========= 00:08:53.367 00:08:53.367 Arbitration 00:08:53.367 =========== 00:08:53.367 Arbitration Burst: no limit 00:08:53.367 00:08:53.367 Power Management 00:08:53.367 ================ 00:08:53.367 Number of Power States: 1 00:08:53.367 Current Power State: Power State #0 00:08:53.367 Power State #0: 00:08:53.367 Max Power: 25.00 W 00:08:53.367 Non-Operational State: Operational 00:08:53.367 Entry Latency: 16 microseconds 00:08:53.367 Exit Latency: 4 microseconds 00:08:53.367 Relative Read Throughput: 0 00:08:53.367 Relative Read Latency: 0 00:08:53.367 Relative Write Throughput: 0 00:08:53.367 Relative Write Latency: 0 00:08:53.367 Idle Power: Not Reported 00:08:53.367 Active Power: Not Reported 00:08:53.367 Non-Operational Permissive Mode: Not Supported 00:08:53.367 00:08:53.367 Health Information 00:08:53.367 ================== 00:08:53.367 Critical Warnings: 00:08:53.367 
Available Spare Space: OK 00:08:53.367 Temperature: OK 00:08:53.367 Device Reliability: OK 00:08:53.367 Read Only: No 00:08:53.367 Volatile Memory Backup: OK 00:08:53.367 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.367 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.367 Available Spare: 0% 00:08:53.367 Available Spare Threshold: 0% 00:08:53.367 Life Percentage Used: 0% 00:08:53.367 Data Units Read: 918 00:08:53.367 Data Units Written: 847 00:08:53.367 Host Read Commands: 40192 00:08:53.367 Host Write Commands: 39615 00:08:53.367 Controller Busy Time: 0 minutes 00:08:53.367 Power Cycles: 0 00:08:53.367 Power On Hours: 0 hours 00:08:53.367 Unsafe Shutdowns: 0 00:08:53.367 Unrecoverable Media Errors: 0 00:08:53.367 Lifetime Error Log Entries: 0 00:08:53.367 Warning Temperature Time: 0 minutes 00:08:53.367 Critical Temperature Time: 0 minutes 00:08:53.367 00:08:53.367 Number of Queues 00:08:53.367 ================ 00:08:53.367 Number of I/O Submission Queues: 64 00:08:53.367 Number of I/O Completion Queues: 64 00:08:53.367 00:08:53.367 ZNS Specific Controller Data 00:08:53.367 ============================ 00:08:53.367 Zone Append Size Limit: 0 00:08:53.367 00:08:53.367 00:08:53.367 Active Namespaces 00:08:53.367 ================= 00:08:53.367 Namespace ID:1 00:08:53.367 Error Recovery Timeout: Unlimited 00:08:53.367 Command Set Identifier: NVM (00h) 00:08:53.367 Deallocate: Supported 00:08:53.367 Deallocated/Unwritten Error: Supported 00:08:53.367 Deallocated Read Value: All 0x00 00:08:53.367 Deallocate in Write Zeroes: Not Supported 00:08:53.367 Deallocated Guard Field: 0xFFFF 00:08:53.367 Flush: Supported 00:08:53.367 Reservation: Not Supported 00:08:53.367 Namespace Sharing Capabilities: Multiple Controllers 00:08:53.367 Size (in LBAs): 262144 (1GiB) 00:08:53.367 Capacity (in LBAs): 262144 (1GiB) 00:08:53.367 Utilization (in LBAs): 262144 (1GiB) 00:08:53.367 Thin Provisioning: Not Supported 00:08:53.367 Per-NS Atomic Units: No 00:08:53.367 Maximum Single Source Range Length: 128 00:08:53.367 Maximum Copy Length: 128 00:08:53.367 Maximum Source Range Count: 128 00:08:53.367 NGUID/EUI64 Never Reused: No 00:08:53.367 Namespace Write Protected: No 00:08:53.367 Endurance group ID: 1 00:08:53.367 Number of LBA Formats: 8 00:08:53.367 Current LBA Format: LBA Format #04 00:08:53.367 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.367 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.367 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.367 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.367 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.367 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.367 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.367 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.367 00:08:53.367 Get Feature FDP: 00:08:53.367 ================ 00:08:53.367 Enabled: Yes 00:08:53.367 FDP configuration index: 0 00:08:53.367 00:08:53.367 FDP configurations log page 00:08:53.367 =========================== 00:08:53.367 Number of FDP configurations: 1 00:08:53.367 Version: 0 00:08:53.368 Size: 112 00:08:53.368 FDP Configuration Descriptor: 0 00:08:53.368 Descriptor Size: 96 00:08:53.368 Reclaim Group Identifier format: 2 00:08:53.368 FDP Volatile Write Cache: Not Present 00:08:53.368 FDP Configuration: Valid 00:08:53.368 Vendor Specific Size: 0 00:08:53.368 Number of Reclaim Groups: 2 00:08:53.368 Number of Reclaim Unit Handles: 8 00:08:53.368 Max Placement Identifiers: 128
00:08:53.368 Number of Namespaces Supported: 256 00:08:53.368 Reclaim Unit Nominal Size: 6000000 bytes 00:08:53.368 Estimated Reclaim Unit Time Limit: Not Reported 00:08:53.368 RUH Desc #000: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #001: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #002: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #003: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #004: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #005: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #006: RUH Type: Initially Isolated 00:08:53.368 RUH Desc #007: RUH Type: Initially Isolated 00:08:53.368 00:08:53.368 FDP reclaim unit handle usage log page 00:08:53.368 ====================================== 00:08:53.368 Number of Reclaim Unit Handles: 8 00:08:53.368 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:53.368 RUH Usage Desc #001: RUH Attributes: Unused 00:08:53.368 RUH Usage Desc #002: RUH Attributes: Unused 00:08:53.368 RUH Usage Desc #003: RUH Attributes: Unused 00:08:53.368 RUH Usage Desc #004: RUH Attributes: Unused 00:08:53.368 RUH Usage Desc #005: RUH Attributes: Unused 00:08:53.368 RUH Usage Desc #006: RUH Attributes: Unused 00:08:53.368 RUH Usage Desc #007: RUH Attributes: Unused 00:08:53.368 00:08:53.368 FDP statistics log page 00:08:53.368 ======================= 00:08:53.368 Host bytes with metadata written: 546086912 00:08:53.368 [2024-11-15 10:52:40.033077] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64126 terminated unexpected 00:08:53.368 Media bytes with metadata written: 546164736 00:08:53.368 Media bytes erased: 0 00:08:53.368 00:08:53.368 FDP events log page 00:08:53.368 =================== 00:08:53.368 Number of FDP events: 0 00:08:53.368 00:08:53.368 NVM Specific Namespace Data 00:08:53.368 =========================== 00:08:53.368 Logical Block Storage Tag Mask: 0 00:08:53.368 Protection Information Capabilities: 00:08:53.368 16b Guard Protection Information Storage Tag Support: No 00:08:53.368 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.368 Storage Tag Check Read Support: No 00:08:53.368 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.368 ===================================================== 00:08:53.368 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:53.368 ===================================================== 00:08:53.368 Controller Capabilities/Features 00:08:53.368 ================================ 00:08:53.368 Vendor ID: 1b36 00:08:53.368 Subsystem Vendor ID: 1af4 00:08:53.368 Serial Number: 12342 00:08:53.368 Model Number: QEMU NVMe Ctrl 00:08:53.368 Firmware Version: 8.0.0 00:08:53.368 Recommended Arb Burst: 6 00:08:53.368 IEEE OUI Identifier: 00 54 52 00:08:53.368 Multi-path I/O
00:08:53.368 May have multiple subsystem ports: No 00:08:53.368 May have multiple controllers: No 00:08:53.368 Associated with SR-IOV VF: No 00:08:53.368 Max Data Transfer Size: 524288 00:08:53.368 Max Number of Namespaces: 256 00:08:53.368 Max Number of I/O Queues: 64 00:08:53.368 NVMe Specification Version (VS): 1.4 00:08:53.368 NVMe Specification Version (Identify): 1.4 00:08:53.368 Maximum Queue Entries: 2048 00:08:53.368 Contiguous Queues Required: Yes 00:08:53.368 Arbitration Mechanisms Supported 00:08:53.368 Weighted Round Robin: Not Supported 00:08:53.368 Vendor Specific: Not Supported 00:08:53.368 Reset Timeout: 7500 ms 00:08:53.368 Doorbell Stride: 4 bytes 00:08:53.368 NVM Subsystem Reset: Not Supported 00:08:53.368 Command Sets Supported 00:08:53.368 NVM Command Set: Supported 00:08:53.368 Boot Partition: Not Supported 00:08:53.368 Memory Page Size Minimum: 4096 bytes 00:08:53.368 Memory Page Size Maximum: 65536 bytes 00:08:53.368 Persistent Memory Region: Not Supported 00:08:53.368 Optional Asynchronous Events Supported 00:08:53.368 Namespace Attribute Notices: Supported 00:08:53.368 Firmware Activation Notices: Not Supported 00:08:53.368 ANA Change Notices: Not Supported 00:08:53.368 PLE Aggregate Log Change Notices: Not Supported 00:08:53.368 LBA Status Info Alert Notices: Not Supported 00:08:53.368 EGE Aggregate Log Change Notices: Not Supported 00:08:53.368 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.368 Zone Descriptor Change Notices: Not Supported 00:08:53.368 Discovery Log Change Notices: Not Supported 00:08:53.368 Controller Attributes 00:08:53.368 128-bit Host Identifier: Not Supported 00:08:53.368 Non-Operational Permissive Mode: Not Supported 00:08:53.368 NVM Sets: Not Supported 00:08:53.368 Read Recovery Levels: Not Supported 00:08:53.368 Endurance Groups: Not Supported 00:08:53.368 Predictable Latency Mode: Not Supported 00:08:53.368 Traffic Based Keep Alive: Not Supported 00:08:53.368 Namespace Granularity: Not Supported 00:08:53.368 SQ Associations: Not Supported 00:08:53.368 UUID List: Not Supported 00:08:53.368 Multi-Domain Subsystem: Not Supported 00:08:53.368 Fixed Capacity Management: Not Supported 00:08:53.368 Variable Capacity Management: Not Supported 00:08:53.369 Delete Endurance Group: Not Supported 00:08:53.369 Delete NVM Set: Not Supported 00:08:53.369 Extended LBA Formats Supported: Supported 00:08:53.369 Flexible Data Placement Supported: Not Supported 00:08:53.369 00:08:53.369 Controller Memory Buffer Support 00:08:53.369 ================================ 00:08:53.369 Supported: No 00:08:53.369 00:08:53.369 Persistent Memory Region Support 00:08:53.369 ================================ 00:08:53.369 Supported: No 00:08:53.369 00:08:53.369 Admin Command Set Attributes 00:08:53.369 ============================ 00:08:53.369 Security Send/Receive: Not Supported 00:08:53.369 Format NVM: Supported 00:08:53.369 Firmware Activate/Download: Not Supported 00:08:53.369 Namespace Management: Supported 00:08:53.369 Device Self-Test: Not Supported 00:08:53.369 Directives: Supported 00:08:53.369 NVMe-MI: Not Supported 00:08:53.369 Virtualization Management: Not Supported 00:08:53.369 Doorbell Buffer Config: Supported 00:08:53.369 Get LBA Status Capability: Not Supported 00:08:53.369 Command & Feature Lockdown Capability: Not Supported 00:08:53.369 Abort Command Limit: 4 00:08:53.369 Async Event Request Limit: 4 00:08:53.369 Number of Firmware Slots: N/A 00:08:53.369 Firmware Slot 1 Read-Only: N/A 00:08:53.369 Firmware Activation Without Reset: N/A
00:08:53.369 Multiple Update Detection Support: N/A 00:08:53.369 Firmware Update Granularity: No Information Provided 00:08:53.369 Per-Namespace SMART Log: Yes 00:08:53.369 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.369 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:53.369 Command Effects Log Page: Supported 00:08:53.369 Get Log Page Extended Data: Supported 00:08:53.369 Telemetry Log Pages: Not Supported 00:08:53.369 Persistent Event Log Pages: Not Supported 00:08:53.369 Supported Log Pages Log Page: May Support 00:08:53.369 Commands Supported & Effects Log Page: Not Supported 00:08:53.369 Feature Identifiers & Effects Log Page:May Support 00:08:53.369 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.369 Data Area 4 for Telemetry Log: Not Supported 00:08:53.369 Error Log Page Entries Supported: 1 00:08:53.369 Keep Alive: Not Supported 00:08:53.369 00:08:53.369 NVM Command Set Attributes 00:08:53.369 ========================== 00:08:53.369 Submission Queue Entry Size 00:08:53.369 Max: 64 00:08:53.369 Min: 64 00:08:53.369 Completion Queue Entry Size 00:08:53.369 Max: 16 00:08:53.369 Min: 16 00:08:53.369 Number of Namespaces: 256 00:08:53.369 Compare Command: Supported 00:08:53.369 Write Uncorrectable Command: Not Supported 00:08:53.369 Dataset Management Command: Supported 00:08:53.369 Write Zeroes Command: Supported 00:08:53.369 Set Features Save Field: Supported 00:08:53.369 Reservations: Not Supported 00:08:53.369 Timestamp: Supported 00:08:53.369 Copy: Supported 00:08:53.369 Volatile Write Cache: Present 00:08:53.369 Atomic Write Unit (Normal): 1 00:08:53.369 Atomic Write Unit (PFail): 1 00:08:53.369 Atomic Compare & Write Unit: 1 00:08:53.369 Fused Compare & Write: Not Supported 00:08:53.369 Scatter-Gather List 00:08:53.369 SGL Command Set: Supported 00:08:53.369 SGL Keyed: Not Supported 00:08:53.369 SGL Bit Bucket Descriptor: Not Supported 00:08:53.369 SGL Metadata Pointer: Not Supported 00:08:53.369 Oversized SGL: Not Supported 00:08:53.369 SGL Metadata Address: Not Supported 00:08:53.369 SGL Offset: Not Supported 00:08:53.369 Transport SGL Data Block: Not Supported 00:08:53.369 Replay Protected Memory Block: Not Supported 00:08:53.369 00:08:53.369 Firmware Slot Information 00:08:53.369 ========================= 00:08:53.369 Active slot: 1 00:08:53.369 Slot 1 Firmware Revision: 1.0 00:08:53.369 00:08:53.369 00:08:53.369 Commands Supported and Effects 00:08:53.369 ============================== 00:08:53.369 Admin Commands 00:08:53.369 -------------- 00:08:53.369 Delete I/O Submission Queue (00h): Supported 00:08:53.369 Create I/O Submission Queue (01h): Supported 00:08:53.369 Get Log Page (02h): Supported 00:08:53.369 Delete I/O Completion Queue (04h): Supported 00:08:53.369 Create I/O Completion Queue (05h): Supported 00:08:53.369 Identify (06h): Supported 00:08:53.369 Abort (08h): Supported 00:08:53.369 Set Features (09h): Supported 00:08:53.369 Get Features (0Ah): Supported 00:08:53.369 Asynchronous Event Request (0Ch): Supported 00:08:53.369 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.369 Directive Send (19h): Supported 00:08:53.369 Directive Receive (1Ah): Supported 00:08:53.369 Virtualization Management (1Ch): Supported 00:08:53.369 Doorbell Buffer Config (7Ch): Supported 00:08:53.369 Format NVM (80h): Supported LBA-Change 00:08:53.369 I/O Commands 00:08:53.369 ------------ 00:08:53.369 Flush (00h): Supported LBA-Change 00:08:53.369 Write (01h): Supported LBA-Change 00:08:53.369 Read (02h): Supported 00:08:53.369 Compare (05h): 
Supported 00:08:53.369 Write Zeroes (08h): Supported LBA-Change 00:08:53.369 Dataset Management (09h): Supported LBA-Change 00:08:53.369 Unknown (0Ch): Supported 00:08:53.369 Unknown (12h): Supported 00:08:53.369 Copy (19h): Supported LBA-Change 00:08:53.369 Unknown (1Dh): Supported LBA-Change 00:08:53.369 00:08:53.369 Error Log 00:08:53.369 ========= 00:08:53.369 00:08:53.369 Arbitration 00:08:53.369 =========== 00:08:53.369 Arbitration Burst: no limit 00:08:53.369 00:08:53.369 Power Management 00:08:53.369 ================ 00:08:53.369 Number of Power States: 1 00:08:53.369 Current Power State: Power State #0 00:08:53.369 Power State #0: 00:08:53.369 Max Power: 25.00 W 00:08:53.369 Non-Operational State: Operational 00:08:53.369 Entry Latency: 16 microseconds 00:08:53.369 Exit Latency: 4 microseconds 00:08:53.369 Relative Read Throughput: 0 00:08:53.369 Relative Read Latency: 0 00:08:53.369 Relative Write Throughput: 0 00:08:53.369 Relative Write Latency: 0 00:08:53.369 Idle Power: Not Reported 00:08:53.369 Active Power: Not Reported 00:08:53.369 Non-Operational Permissive Mode: Not Supported 00:08:53.369 00:08:53.369 Health Information 00:08:53.369 ================== 00:08:53.369 Critical Warnings: 00:08:53.369 Available Spare Space: OK 00:08:53.369 Temperature: OK 00:08:53.369 Device Reliability: OK 00:08:53.369 Read Only: No 00:08:53.369 Volatile Memory Backup: OK 00:08:53.369 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.369 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.369 Available Spare: 0% 00:08:53.369 Available Spare Threshold: 0% 00:08:53.369 Life Percentage Used: 0% 00:08:53.369 Data Units Read: 2531 00:08:53.369 Data Units Written: 2318 00:08:53.369 Host Read Commands: 118834 00:08:53.369 Host Write Commands: 117103 00:08:53.369 Controller Busy Time: 0 minutes 00:08:53.369 Power Cycles: 0 00:08:53.369 Power On Hours: 0 hours 00:08:53.369 Unsafe Shutdowns: 0 00:08:53.369 Unrecoverable Media Errors: 0 00:08:53.369 Lifetime Error Log Entries: 0 00:08:53.369 Warning Temperature Time: 0 minutes 00:08:53.369 Critical Temperature Time: 0 minutes 00:08:53.369 00:08:53.369 Number of Queues 00:08:53.369 ================ 00:08:53.369 Number of I/O Submission Queues: 64 00:08:53.369 Number of I/O Completion Queues: 64 00:08:53.369 00:08:53.369 ZNS Specific Controller Data 00:08:53.369 ============================ 00:08:53.369 Zone Append Size Limit: 0 00:08:53.369 00:08:53.369 00:08:53.369 Active Namespaces 00:08:53.369 ================= 00:08:53.369 Namespace ID:1 00:08:53.369 Error Recovery Timeout: Unlimited 00:08:53.369 Command Set Identifier: NVM (00h) 00:08:53.369 Deallocate: Supported 00:08:53.369 Deallocated/Unwritten Error: Supported 00:08:53.369 Deallocated Read Value: All 0x00 00:08:53.369 Deallocate in Write Zeroes: Not Supported 00:08:53.369 Deallocated Guard Field: 0xFFFF 00:08:53.369 Flush: Supported 00:08:53.369 Reservation: Not Supported 00:08:53.369 Namespace Sharing Capabilities: Private 00:08:53.369 Size (in LBAs): 1048576 (4GiB) 00:08:53.369 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.369 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.369 Thin Provisioning: Not Supported 00:08:53.369 Per-NS Atomic Units: No 00:08:53.369 Maximum Single Source Range Length: 128 00:08:53.369 Maximum Copy Length: 128 00:08:53.370 Maximum Source Range Count: 128 00:08:53.370 NGUID/EUI64 Never Reused: No 00:08:53.370 Namespace Write Protected: No 00:08:53.370 Number of LBA Formats: 8 00:08:53.370 Current LBA Format: LBA Format #04 00:08:53.370 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:53.370 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.370 00:08:53.370 NVM Specific Namespace Data 00:08:53.370 =========================== 00:08:53.370 Logical Block Storage Tag Mask: 0 00:08:53.370 Protection Information Capabilities: 00:08:53.370 16b Guard Protection Information Storage Tag Support: No 00:08:53.370 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.370 Storage Tag Check Read Support: No 00:08:53.370 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Namespace ID:2 00:08:53.370 Error Recovery Timeout: Unlimited 00:08:53.370 Command Set Identifier: NVM (00h) 00:08:53.370 Deallocate: Supported 00:08:53.370 Deallocated/Unwritten Error: Supported 00:08:53.370 Deallocated Read Value: All 0x00 00:08:53.370 Deallocate in Write Zeroes: Not Supported 00:08:53.370 Deallocated Guard Field: 0xFFFF 00:08:53.370 Flush: Supported 00:08:53.370 Reservation: Not Supported 00:08:53.370 Namespace Sharing Capabilities: Private 00:08:53.370 Size (in LBAs): 1048576 (4GiB) 00:08:53.370 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.370 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.370 Thin Provisioning: Not Supported 00:08:53.370 Per-NS Atomic Units: No 00:08:53.370 Maximum Single Source Range Length: 128 00:08:53.370 Maximum Copy Length: 128 00:08:53.370 Maximum Source Range Count: 128 00:08:53.370 NGUID/EUI64 Never Reused: No 00:08:53.370 Namespace Write Protected: No 00:08:53.370 Number of LBA Formats: 8 00:08:53.370 Current LBA Format: LBA Format #04 00:08:53.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.370 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.370 00:08:53.370 NVM Specific Namespace Data 00:08:53.370 =========================== 00:08:53.370 Logical Block Storage Tag Mask: 0 00:08:53.370 Protection Information Capabilities: 00:08:53.370 16b Guard Protection Information Storage Tag Support: No 00:08:53.370 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
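
The namespace listings above are easy to spot-check: with the current LBA format #04 (4096-byte data blocks, no metadata), the reported 1048576 LBAs come out to exactly the advertised 4GiB. A minimal shell sanity check, using only values printed in this dump:

  # Spot-check namespace capacity: LBA count x block size (LBA format #04 = 4096-byte data, 0 metadata)
  echo $(( 1048576 * 4096 ))            # 4294967296 bytes
  echo $(( 1048576 * 4096 / 1024**3 ))  # 4 -- matches "Size (in LBAs): 1048576 (4GiB)"
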
00:08:53.370 Storage Tag Check Read Support: No 00:08:53.370 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Namespace ID:3 00:08:53.370 Error Recovery Timeout: Unlimited 00:08:53.370 Command Set Identifier: NVM (00h) 00:08:53.370 Deallocate: Supported 00:08:53.370 Deallocated/Unwritten Error: Supported 00:08:53.370 Deallocated Read Value: All 0x00 00:08:53.370 Deallocate in Write Zeroes: Not Supported 00:08:53.370 Deallocated Guard Field: 0xFFFF 00:08:53.370 Flush: Supported 00:08:53.370 Reservation: Not Supported 00:08:53.370 Namespace Sharing Capabilities: Private 00:08:53.370 Size (in LBAs): 1048576 (4GiB) 00:08:53.370 Capacity (in LBAs): 1048576 (4GiB) 00:08:53.370 Utilization (in LBAs): 1048576 (4GiB) 00:08:53.370 Thin Provisioning: Not Supported 00:08:53.370 Per-NS Atomic Units: No 00:08:53.370 Maximum Single Source Range Length: 128 00:08:53.370 Maximum Copy Length: 128 00:08:53.370 Maximum Source Range Count: 128 00:08:53.370 NGUID/EUI64 Never Reused: No 00:08:53.370 Namespace Write Protected: No 00:08:53.370 Number of LBA Formats: 8 00:08:53.370 Current LBA Format: LBA Format #04 00:08:53.370 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.370 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.370 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.370 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.370 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.370 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.370 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.370 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.370 00:08:53.370 NVM Specific Namespace Data 00:08:53.370 =========================== 00:08:53.370 Logical Block Storage Tag Mask: 0 00:08:53.370 Protection Information Capabilities: 00:08:53.370 16b Guard Protection Information Storage Tag Support: No 00:08:53.370 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.370 Storage Tag Check Read Support: No 00:08:53.370 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.370 10:52:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.370 10:52:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:53.631 ===================================================== 00:08:53.631 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:53.631 ===================================================== 00:08:53.631 Controller Capabilities/Features 00:08:53.631 ================================ 00:08:53.631 Vendor ID: 1b36 00:08:53.631 Subsystem Vendor ID: 1af4 00:08:53.631 Serial Number: 12340 00:08:53.631 Model Number: QEMU NVMe Ctrl 00:08:53.631 Firmware Version: 8.0.0 00:08:53.631 Recommended Arb Burst: 6 00:08:53.631 IEEE OUI Identifier: 00 54 52 00:08:53.631 Multi-path I/O 00:08:53.631 May have multiple subsystem ports: No 00:08:53.631 May have multiple controllers: No 00:08:53.631 Associated with SR-IOV VF: No 00:08:53.631 Max Data Transfer Size: 524288 00:08:53.631 Max Number of Namespaces: 256 00:08:53.631 Max Number of I/O Queues: 64 00:08:53.631 NVMe Specification Version (VS): 1.4 00:08:53.631 NVMe Specification Version (Identify): 1.4 00:08:53.631 Maximum Queue Entries: 2048 00:08:53.631 Contiguous Queues Required: Yes 00:08:53.631 Arbitration Mechanisms Supported 00:08:53.631 Weighted Round Robin: Not Supported 00:08:53.631 Vendor Specific: Not Supported 00:08:53.631 Reset Timeout: 7500 ms 00:08:53.631 Doorbell Stride: 4 bytes 00:08:53.631 NVM Subsystem Reset: Not Supported 00:08:53.631 Command Sets Supported 00:08:53.631 NVM Command Set: Supported 00:08:53.631 Boot Partition: Not Supported 00:08:53.631 Memory Page Size Minimum: 4096 bytes 00:08:53.631 Memory Page Size Maximum: 65536 bytes 00:08:53.631 Persistent Memory Region: Not Supported 00:08:53.631 Optional Asynchronous Events Supported 00:08:53.631 Namespace Attribute Notices: Supported 00:08:53.631 Firmware Activation Notices: Not Supported 00:08:53.631 ANA Change Notices: Not Supported 00:08:53.631 PLE Aggregate Log Change Notices: Not Supported 00:08:53.631 LBA Status Info Alert Notices: Not Supported 00:08:53.631 EGE Aggregate Log Change Notices: Not Supported 00:08:53.631 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.631 Zone Descriptor Change Notices: Not Supported 00:08:53.631 Discovery Log Change Notices: Not Supported 00:08:53.631 Controller Attributes 00:08:53.631 128-bit Host Identifier: Not Supported 00:08:53.631 Non-Operational Permissive Mode: Not Supported 00:08:53.631 NVM Sets: Not Supported 00:08:53.631 Read Recovery Levels: Not Supported 00:08:53.631 Endurance Groups: Not Supported 00:08:53.631 Predictable Latency Mode: Not Supported 00:08:53.631 Traffic Based Keep ALive: Not Supported 00:08:53.631 Namespace Granularity: Not Supported 00:08:53.631 SQ Associations: Not Supported 00:08:53.631 UUID List: Not Supported 00:08:53.631 Multi-Domain Subsystem: Not Supported 00:08:53.631 Fixed Capacity Management: Not Supported 00:08:53.631 Variable Capacity Management: Not Supported 00:08:53.631 Delete Endurance Group: Not Supported 00:08:53.631 Delete NVM Set: Not Supported 00:08:53.631 Extended LBA Formats Supported: Supported 00:08:53.631 Flexible Data Placement Supported: Not Supported 00:08:53.631 00:08:53.631 Controller Memory Buffer Support 00:08:53.631 ================================ 00:08:53.631 Supported: No 00:08:53.631 00:08:53.631 Persistent Memory Region Support 00:08:53.631 
================================ 00:08:53.631 Supported: No 00:08:53.631 00:08:53.631 Admin Command Set Attributes 00:08:53.631 ============================ 00:08:53.631 Security Send/Receive: Not Supported 00:08:53.631 Format NVM: Supported 00:08:53.631 Firmware Activate/Download: Not Supported 00:08:53.631 Namespace Management: Supported 00:08:53.631 Device Self-Test: Not Supported 00:08:53.631 Directives: Supported 00:08:53.631 NVMe-MI: Not Supported 00:08:53.631 Virtualization Management: Not Supported 00:08:53.631 Doorbell Buffer Config: Supported 00:08:53.631 Get LBA Status Capability: Not Supported 00:08:53.631 Command & Feature Lockdown Capability: Not Supported 00:08:53.631 Abort Command Limit: 4 00:08:53.631 Async Event Request Limit: 4 00:08:53.631 Number of Firmware Slots: N/A 00:08:53.631 Firmware Slot 1 Read-Only: N/A 00:08:53.631 Firmware Activation Without Reset: N/A 00:08:53.631 Multiple Update Detection Support: N/A 00:08:53.631 Firmware Update Granularity: No Information Provided 00:08:53.631 Per-Namespace SMART Log: Yes 00:08:53.631 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.631 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:53.631 Command Effects Log Page: Supported 00:08:53.631 Get Log Page Extended Data: Supported 00:08:53.631 Telemetry Log Pages: Not Supported 00:08:53.631 Persistent Event Log Pages: Not Supported 00:08:53.631 Supported Log Pages Log Page: May Support 00:08:53.631 Commands Supported & Effects Log Page: Not Supported 00:08:53.631 Feature Identifiers & Effects Log Page:May Support 00:08:53.631 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.631 Data Area 4 for Telemetry Log: Not Supported 00:08:53.631 Error Log Page Entries Supported: 1 00:08:53.631 Keep Alive: Not Supported 00:08:53.631 00:08:53.631 NVM Command Set Attributes 00:08:53.631 ========================== 00:08:53.631 Submission Queue Entry Size 00:08:53.632 Max: 64 00:08:53.632 Min: 64 00:08:53.632 Completion Queue Entry Size 00:08:53.632 Max: 16 00:08:53.632 Min: 16 00:08:53.632 Number of Namespaces: 256 00:08:53.632 Compare Command: Supported 00:08:53.632 Write Uncorrectable Command: Not Supported 00:08:53.632 Dataset Management Command: Supported 00:08:53.632 Write Zeroes Command: Supported 00:08:53.632 Set Features Save Field: Supported 00:08:53.632 Reservations: Not Supported 00:08:53.632 Timestamp: Supported 00:08:53.632 Copy: Supported 00:08:53.632 Volatile Write Cache: Present 00:08:53.632 Atomic Write Unit (Normal): 1 00:08:53.632 Atomic Write Unit (PFail): 1 00:08:53.632 Atomic Compare & Write Unit: 1 00:08:53.632 Fused Compare & Write: Not Supported 00:08:53.632 Scatter-Gather List 00:08:53.632 SGL Command Set: Supported 00:08:53.632 SGL Keyed: Not Supported 00:08:53.632 SGL Bit Bucket Descriptor: Not Supported 00:08:53.632 SGL Metadata Pointer: Not Supported 00:08:53.632 Oversized SGL: Not Supported 00:08:53.632 SGL Metadata Address: Not Supported 00:08:53.632 SGL Offset: Not Supported 00:08:53.632 Transport SGL Data Block: Not Supported 00:08:53.632 Replay Protected Memory Block: Not Supported 00:08:53.632 00:08:53.632 Firmware Slot Information 00:08:53.632 ========================= 00:08:53.632 Active slot: 1 00:08:53.632 Slot 1 Firmware Revision: 1.0 00:08:53.632 00:08:53.632 00:08:53.632 Commands Supported and Effects 00:08:53.632 ============================== 00:08:53.632 Admin Commands 00:08:53.632 -------------- 00:08:53.632 Delete I/O Submission Queue (00h): Supported 00:08:53.632 Create I/O Submission Queue (01h): Supported 00:08:53.632 
Get Log Page (02h): Supported 00:08:53.632 Delete I/O Completion Queue (04h): Supported 00:08:53.632 Create I/O Completion Queue (05h): Supported 00:08:53.632 Identify (06h): Supported 00:08:53.632 Abort (08h): Supported 00:08:53.632 Set Features (09h): Supported 00:08:53.632 Get Features (0Ah): Supported 00:08:53.632 Asynchronous Event Request (0Ch): Supported 00:08:53.632 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.632 Directive Send (19h): Supported 00:08:53.632 Directive Receive (1Ah): Supported 00:08:53.632 Virtualization Management (1Ch): Supported 00:08:53.632 Doorbell Buffer Config (7Ch): Supported 00:08:53.632 Format NVM (80h): Supported LBA-Change 00:08:53.632 I/O Commands 00:08:53.632 ------------ 00:08:53.632 Flush (00h): Supported LBA-Change 00:08:53.632 Write (01h): Supported LBA-Change 00:08:53.632 Read (02h): Supported 00:08:53.632 Compare (05h): Supported 00:08:53.632 Write Zeroes (08h): Supported LBA-Change 00:08:53.632 Dataset Management (09h): Supported LBA-Change 00:08:53.632 Unknown (0Ch): Supported 00:08:53.632 Unknown (12h): Supported 00:08:53.632 Copy (19h): Supported LBA-Change 00:08:53.632 Unknown (1Dh): Supported LBA-Change 00:08:53.632 00:08:53.632 Error Log 00:08:53.632 ========= 00:08:53.632 00:08:53.632 Arbitration 00:08:53.632 =========== 00:08:53.632 Arbitration Burst: no limit 00:08:53.632 00:08:53.632 Power Management 00:08:53.632 ================ 00:08:53.632 Number of Power States: 1 00:08:53.632 Current Power State: Power State #0 00:08:53.632 Power State #0: 00:08:53.632 Max Power: 25.00 W 00:08:53.632 Non-Operational State: Operational 00:08:53.632 Entry Latency: 16 microseconds 00:08:53.632 Exit Latency: 4 microseconds 00:08:53.632 Relative Read Throughput: 0 00:08:53.632 Relative Read Latency: 0 00:08:53.632 Relative Write Throughput: 0 00:08:53.632 Relative Write Latency: 0 00:08:53.632 Idle Power: Not Reported 00:08:53.632 Active Power: Not Reported 00:08:53.632 Non-Operational Permissive Mode: Not Supported 00:08:53.632 00:08:53.632 Health Information 00:08:53.632 ================== 00:08:53.632 Critical Warnings: 00:08:53.632 Available Spare Space: OK 00:08:53.632 Temperature: OK 00:08:53.632 Device Reliability: OK 00:08:53.632 Read Only: No 00:08:53.632 Volatile Memory Backup: OK 00:08:53.632 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.632 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.632 Available Spare: 0% 00:08:53.632 Available Spare Threshold: 0% 00:08:53.632 Life Percentage Used: 0% 00:08:53.632 Data Units Read: 795 00:08:53.632 Data Units Written: 723 00:08:53.632 Host Read Commands: 38867 00:08:53.632 Host Write Commands: 38653 00:08:53.632 Controller Busy Time: 0 minutes 00:08:53.632 Power Cycles: 0 00:08:53.632 Power On Hours: 0 hours 00:08:53.632 Unsafe Shutdowns: 0 00:08:53.632 Unrecoverable Media Errors: 0 00:08:53.632 Lifetime Error Log Entries: 0 00:08:53.632 Warning Temperature Time: 0 minutes 00:08:53.632 Critical Temperature Time: 0 minutes 00:08:53.632 00:08:53.632 Number of Queues 00:08:53.632 ================ 00:08:53.632 Number of I/O Submission Queues: 64 00:08:53.632 Number of I/O Completion Queues: 64 00:08:53.632 00:08:53.632 ZNS Specific Controller Data 00:08:53.632 ============================ 00:08:53.632 Zone Append Size Limit: 0 00:08:53.632 00:08:53.632 00:08:53.632 Active Namespaces 00:08:53.632 ================= 00:08:53.632 Namespace ID:1 00:08:53.632 Error Recovery Timeout: Unlimited 00:08:53.632 Command Set Identifier: NVM (00h) 00:08:53.632 Deallocate: Supported 
00:08:53.632 Deallocated/Unwritten Error: Supported 00:08:53.632 Deallocated Read Value: All 0x00 00:08:53.632 Deallocate in Write Zeroes: Not Supported 00:08:53.632 Deallocated Guard Field: 0xFFFF 00:08:53.632 Flush: Supported 00:08:53.632 Reservation: Not Supported 00:08:53.632 Metadata Transferred as: Separate Metadata Buffer 00:08:53.632 Namespace Sharing Capabilities: Private 00:08:53.632 Size (in LBAs): 1548666 (5GiB) 00:08:53.632 Capacity (in LBAs): 1548666 (5GiB) 00:08:53.632 Utilization (in LBAs): 1548666 (5GiB) 00:08:53.632 Thin Provisioning: Not Supported 00:08:53.632 Per-NS Atomic Units: No 00:08:53.632 Maximum Single Source Range Length: 128 00:08:53.632 Maximum Copy Length: 128 00:08:53.632 Maximum Source Range Count: 128 00:08:53.632 NGUID/EUI64 Never Reused: No 00:08:53.632 Namespace Write Protected: No 00:08:53.632 Number of LBA Formats: 8 00:08:53.632 Current LBA Format: LBA Format #07 00:08:53.632 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.632 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:53.632 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.632 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.632 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.632 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.632 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.632 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.632 00:08:53.632 NVM Specific Namespace Data 00:08:53.632 =========================== 00:08:53.632 Logical Block Storage Tag Mask: 0 00:08:53.632 Protection Information Capabilities: 00:08:53.632 16b Guard Protection Information Storage Tag Support: No 00:08:53.632 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.632 Storage Tag Check Read Support: No 00:08:53.632 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.632 10:52:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.632 10:52:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:53.892 ===================================================== 00:08:53.892 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:53.892 ===================================================== 00:08:53.892 Controller Capabilities/Features 00:08:53.892 ================================ 00:08:53.892 Vendor ID: 1b36 00:08:53.892 Subsystem Vendor ID: 1af4 00:08:53.892 Serial Number: 12341 00:08:53.892 Model Number: QEMU NVMe Ctrl 00:08:53.892 Firmware Version: 8.0.0 00:08:53.892 Recommended Arb Burst: 6 00:08:53.892 IEEE OUI Identifier: 00 54 52 00:08:53.892 Multi-path I/O 00:08:53.892 May have multiple subsystem ports: No 00:08:53.892 May have multiple 
controllers: No 00:08:53.892 Associated with SR-IOV VF: No 00:08:53.892 Max Data Transfer Size: 524288 00:08:53.892 Max Number of Namespaces: 256 00:08:53.892 Max Number of I/O Queues: 64 00:08:53.892 NVMe Specification Version (VS): 1.4 00:08:53.892 NVMe Specification Version (Identify): 1.4 00:08:53.892 Maximum Queue Entries: 2048 00:08:53.892 Contiguous Queues Required: Yes 00:08:53.892 Arbitration Mechanisms Supported 00:08:53.892 Weighted Round Robin: Not Supported 00:08:53.892 Vendor Specific: Not Supported 00:08:53.892 Reset Timeout: 7500 ms 00:08:53.892 Doorbell Stride: 4 bytes 00:08:53.892 NVM Subsystem Reset: Not Supported 00:08:53.892 Command Sets Supported 00:08:53.892 NVM Command Set: Supported 00:08:53.892 Boot Partition: Not Supported 00:08:53.892 Memory Page Size Minimum: 4096 bytes 00:08:53.892 Memory Page Size Maximum: 65536 bytes 00:08:53.892 Persistent Memory Region: Not Supported 00:08:53.892 Optional Asynchronous Events Supported 00:08:53.892 Namespace Attribute Notices: Supported 00:08:53.892 Firmware Activation Notices: Not Supported 00:08:53.892 ANA Change Notices: Not Supported 00:08:53.892 PLE Aggregate Log Change Notices: Not Supported 00:08:53.892 LBA Status Info Alert Notices: Not Supported 00:08:53.893 EGE Aggregate Log Change Notices: Not Supported 00:08:53.893 Normal NVM Subsystem Shutdown event: Not Supported 00:08:53.893 Zone Descriptor Change Notices: Not Supported 00:08:53.893 Discovery Log Change Notices: Not Supported 00:08:53.893 Controller Attributes 00:08:53.893 128-bit Host Identifier: Not Supported 00:08:53.893 Non-Operational Permissive Mode: Not Supported 00:08:53.893 NVM Sets: Not Supported 00:08:53.893 Read Recovery Levels: Not Supported 00:08:53.893 Endurance Groups: Not Supported 00:08:53.893 Predictable Latency Mode: Not Supported 00:08:53.893 Traffic Based Keep ALive: Not Supported 00:08:53.893 Namespace Granularity: Not Supported 00:08:53.893 SQ Associations: Not Supported 00:08:53.893 UUID List: Not Supported 00:08:53.893 Multi-Domain Subsystem: Not Supported 00:08:53.893 Fixed Capacity Management: Not Supported 00:08:53.893 Variable Capacity Management: Not Supported 00:08:53.893 Delete Endurance Group: Not Supported 00:08:53.893 Delete NVM Set: Not Supported 00:08:53.893 Extended LBA Formats Supported: Supported 00:08:53.893 Flexible Data Placement Supported: Not Supported 00:08:53.893 00:08:53.893 Controller Memory Buffer Support 00:08:53.893 ================================ 00:08:53.893 Supported: No 00:08:53.893 00:08:53.893 Persistent Memory Region Support 00:08:53.893 ================================ 00:08:53.893 Supported: No 00:08:53.893 00:08:53.893 Admin Command Set Attributes 00:08:53.893 ============================ 00:08:53.893 Security Send/Receive: Not Supported 00:08:53.893 Format NVM: Supported 00:08:53.893 Firmware Activate/Download: Not Supported 00:08:53.893 Namespace Management: Supported 00:08:53.893 Device Self-Test: Not Supported 00:08:53.893 Directives: Supported 00:08:53.893 NVMe-MI: Not Supported 00:08:53.893 Virtualization Management: Not Supported 00:08:53.893 Doorbell Buffer Config: Supported 00:08:53.893 Get LBA Status Capability: Not Supported 00:08:53.893 Command & Feature Lockdown Capability: Not Supported 00:08:53.893 Abort Command Limit: 4 00:08:53.893 Async Event Request Limit: 4 00:08:53.893 Number of Firmware Slots: N/A 00:08:53.893 Firmware Slot 1 Read-Only: N/A 00:08:53.893 Firmware Activation Without Reset: N/A 00:08:53.893 Multiple Update Detection Support: N/A 00:08:53.893 Firmware Update 
Granularity: No Information Provided 00:08:53.893 Per-Namespace SMART Log: Yes 00:08:53.893 Asymmetric Namespace Access Log Page: Not Supported 00:08:53.893 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:53.893 Command Effects Log Page: Supported 00:08:53.893 Get Log Page Extended Data: Supported 00:08:53.893 Telemetry Log Pages: Not Supported 00:08:53.893 Persistent Event Log Pages: Not Supported 00:08:53.893 Supported Log Pages Log Page: May Support 00:08:53.893 Commands Supported & Effects Log Page: Not Supported 00:08:53.893 Feature Identifiers & Effects Log Page:May Support 00:08:53.893 NVMe-MI Commands & Effects Log Page: May Support 00:08:53.893 Data Area 4 for Telemetry Log: Not Supported 00:08:53.893 Error Log Page Entries Supported: 1 00:08:53.893 Keep Alive: Not Supported 00:08:53.893 00:08:53.893 NVM Command Set Attributes 00:08:53.893 ========================== 00:08:53.893 Submission Queue Entry Size 00:08:53.893 Max: 64 00:08:53.893 Min: 64 00:08:53.893 Completion Queue Entry Size 00:08:53.893 Max: 16 00:08:53.893 Min: 16 00:08:53.893 Number of Namespaces: 256 00:08:53.893 Compare Command: Supported 00:08:53.893 Write Uncorrectable Command: Not Supported 00:08:53.893 Dataset Management Command: Supported 00:08:53.893 Write Zeroes Command: Supported 00:08:53.893 Set Features Save Field: Supported 00:08:53.893 Reservations: Not Supported 00:08:53.893 Timestamp: Supported 00:08:53.893 Copy: Supported 00:08:53.893 Volatile Write Cache: Present 00:08:53.893 Atomic Write Unit (Normal): 1 00:08:53.893 Atomic Write Unit (PFail): 1 00:08:53.893 Atomic Compare & Write Unit: 1 00:08:53.893 Fused Compare & Write: Not Supported 00:08:53.893 Scatter-Gather List 00:08:53.893 SGL Command Set: Supported 00:08:53.893 SGL Keyed: Not Supported 00:08:53.893 SGL Bit Bucket Descriptor: Not Supported 00:08:53.893 SGL Metadata Pointer: Not Supported 00:08:53.893 Oversized SGL: Not Supported 00:08:53.893 SGL Metadata Address: Not Supported 00:08:53.893 SGL Offset: Not Supported 00:08:53.893 Transport SGL Data Block: Not Supported 00:08:53.893 Replay Protected Memory Block: Not Supported 00:08:53.893 00:08:53.893 Firmware Slot Information 00:08:53.893 ========================= 00:08:53.893 Active slot: 1 00:08:53.893 Slot 1 Firmware Revision: 1.0 00:08:53.893 00:08:53.893 00:08:53.893 Commands Supported and Effects 00:08:53.893 ============================== 00:08:53.893 Admin Commands 00:08:53.893 -------------- 00:08:53.893 Delete I/O Submission Queue (00h): Supported 00:08:53.893 Create I/O Submission Queue (01h): Supported 00:08:53.893 Get Log Page (02h): Supported 00:08:53.893 Delete I/O Completion Queue (04h): Supported 00:08:53.893 Create I/O Completion Queue (05h): Supported 00:08:53.893 Identify (06h): Supported 00:08:53.893 Abort (08h): Supported 00:08:53.893 Set Features (09h): Supported 00:08:53.893 Get Features (0Ah): Supported 00:08:53.893 Asynchronous Event Request (0Ch): Supported 00:08:53.893 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:53.893 Directive Send (19h): Supported 00:08:53.893 Directive Receive (1Ah): Supported 00:08:53.893 Virtualization Management (1Ch): Supported 00:08:53.893 Doorbell Buffer Config (7Ch): Supported 00:08:53.893 Format NVM (80h): Supported LBA-Change 00:08:53.893 I/O Commands 00:08:53.893 ------------ 00:08:53.893 Flush (00h): Supported LBA-Change 00:08:53.893 Write (01h): Supported LBA-Change 00:08:53.893 Read (02h): Supported 00:08:53.893 Compare (05h): Supported 00:08:53.893 Write Zeroes (08h): Supported LBA-Change 00:08:53.893 
Dataset Management (09h): Supported LBA-Change 00:08:53.893 Unknown (0Ch): Supported 00:08:53.893 Unknown (12h): Supported 00:08:53.893 Copy (19h): Supported LBA-Change 00:08:53.893 Unknown (1Dh): Supported LBA-Change 00:08:53.893 00:08:53.893 Error Log 00:08:53.893 ========= 00:08:53.893 00:08:53.893 Arbitration 00:08:53.893 =========== 00:08:53.893 Arbitration Burst: no limit 00:08:53.893 00:08:53.893 Power Management 00:08:53.893 ================ 00:08:53.893 Number of Power States: 1 00:08:53.893 Current Power State: Power State #0 00:08:53.893 Power State #0: 00:08:53.893 Max Power: 25.00 W 00:08:53.893 Non-Operational State: Operational 00:08:53.893 Entry Latency: 16 microseconds 00:08:53.893 Exit Latency: 4 microseconds 00:08:53.893 Relative Read Throughput: 0 00:08:53.893 Relative Read Latency: 0 00:08:53.893 Relative Write Throughput: 0 00:08:53.893 Relative Write Latency: 0 00:08:53.893 Idle Power: Not Reported 00:08:53.893 Active Power: Not Reported 00:08:53.893 Non-Operational Permissive Mode: Not Supported 00:08:53.893 00:08:53.893 Health Information 00:08:53.893 ================== 00:08:53.893 Critical Warnings: 00:08:53.893 Available Spare Space: OK 00:08:53.893 Temperature: OK 00:08:53.893 Device Reliability: OK 00:08:53.893 Read Only: No 00:08:53.893 Volatile Memory Backup: OK 00:08:53.893 Current Temperature: 323 Kelvin (50 Celsius) 00:08:53.893 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:53.893 Available Spare: 0% 00:08:53.893 Available Spare Threshold: 0% 00:08:53.893 Life Percentage Used: 0% 00:08:53.893 Data Units Read: 1241 00:08:53.893 Data Units Written: 1102 00:08:53.893 Host Read Commands: 57438 00:08:53.893 Host Write Commands: 56137 00:08:53.893 Controller Busy Time: 0 minutes 00:08:53.893 Power Cycles: 0 00:08:53.893 Power On Hours: 0 hours 00:08:53.893 Unsafe Shutdowns: 0 00:08:53.893 Unrecoverable Media Errors: 0 00:08:53.893 Lifetime Error Log Entries: 0 00:08:53.893 Warning Temperature Time: 0 minutes 00:08:53.893 Critical Temperature Time: 0 minutes 00:08:53.893 00:08:53.893 Number of Queues 00:08:53.893 ================ 00:08:53.893 Number of I/O Submission Queues: 64 00:08:53.893 Number of I/O Completion Queues: 64 00:08:53.893 00:08:53.893 ZNS Specific Controller Data 00:08:53.893 ============================ 00:08:53.893 Zone Append Size Limit: 0 00:08:53.893 00:08:53.893 00:08:53.893 Active Namespaces 00:08:53.893 ================= 00:08:53.893 Namespace ID:1 00:08:53.893 Error Recovery Timeout: Unlimited 00:08:53.893 Command Set Identifier: NVM (00h) 00:08:53.893 Deallocate: Supported 00:08:53.893 Deallocated/Unwritten Error: Supported 00:08:53.893 Deallocated Read Value: All 0x00 00:08:53.894 Deallocate in Write Zeroes: Not Supported 00:08:53.894 Deallocated Guard Field: 0xFFFF 00:08:53.894 Flush: Supported 00:08:53.894 Reservation: Not Supported 00:08:53.894 Namespace Sharing Capabilities: Private 00:08:53.894 Size (in LBAs): 1310720 (5GiB) 00:08:53.894 Capacity (in LBAs): 1310720 (5GiB) 00:08:53.894 Utilization (in LBAs): 1310720 (5GiB) 00:08:53.894 Thin Provisioning: Not Supported 00:08:53.894 Per-NS Atomic Units: No 00:08:53.894 Maximum Single Source Range Length: 128 00:08:53.894 Maximum Copy Length: 128 00:08:53.894 Maximum Source Range Count: 128 00:08:53.894 NGUID/EUI64 Never Reused: No 00:08:53.894 Namespace Write Protected: No 00:08:53.894 Number of LBA Formats: 8 00:08:53.894 Current LBA Format: LBA Format #04 00:08:53.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:53.894 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:08:53.894 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:53.894 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:53.894 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:53.894 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:53.894 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:53.894 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:53.894 00:08:53.894 NVM Specific Namespace Data 00:08:53.894 =========================== 00:08:53.894 Logical Block Storage Tag Mask: 0 00:08:53.894 Protection Information Capabilities: 00:08:53.894 16b Guard Protection Information Storage Tag Support: No 00:08:53.894 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:53.894 Storage Tag Check Read Support: No 00:08:53.894 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:53.894 10:52:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:53.894 10:52:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:54.154 ===================================================== 00:08:54.154 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:54.154 ===================================================== 00:08:54.154 Controller Capabilities/Features 00:08:54.154 ================================ 00:08:54.154 Vendor ID: 1b36 00:08:54.154 Subsystem Vendor ID: 1af4 00:08:54.154 Serial Number: 12342 00:08:54.154 Model Number: QEMU NVMe Ctrl 00:08:54.154 Firmware Version: 8.0.0 00:08:54.154 Recommended Arb Burst: 6 00:08:54.154 IEEE OUI Identifier: 00 54 52 00:08:54.154 Multi-path I/O 00:08:54.154 May have multiple subsystem ports: No 00:08:54.154 May have multiple controllers: No 00:08:54.154 Associated with SR-IOV VF: No 00:08:54.154 Max Data Transfer Size: 524288 00:08:54.154 Max Number of Namespaces: 256 00:08:54.154 Max Number of I/O Queues: 64 00:08:54.154 NVMe Specification Version (VS): 1.4 00:08:54.154 NVMe Specification Version (Identify): 1.4 00:08:54.154 Maximum Queue Entries: 2048 00:08:54.154 Contiguous Queues Required: Yes 00:08:54.154 Arbitration Mechanisms Supported 00:08:54.154 Weighted Round Robin: Not Supported 00:08:54.154 Vendor Specific: Not Supported 00:08:54.154 Reset Timeout: 7500 ms 00:08:54.154 Doorbell Stride: 4 bytes 00:08:54.154 NVM Subsystem Reset: Not Supported 00:08:54.154 Command Sets Supported 00:08:54.154 NVM Command Set: Supported 00:08:54.154 Boot Partition: Not Supported 00:08:54.154 Memory Page Size Minimum: 4096 bytes 00:08:54.154 Memory Page Size Maximum: 65536 bytes 00:08:54.154 Persistent Memory Region: Not Supported 00:08:54.154 Optional Asynchronous Events Supported 00:08:54.154 Namespace Attribute Notices: Supported 00:08:54.154 
Firmware Activation Notices: Not Supported 00:08:54.154 ANA Change Notices: Not Supported 00:08:54.154 PLE Aggregate Log Change Notices: Not Supported 00:08:54.154 LBA Status Info Alert Notices: Not Supported 00:08:54.154 EGE Aggregate Log Change Notices: Not Supported 00:08:54.154 Normal NVM Subsystem Shutdown event: Not Supported 00:08:54.154 Zone Descriptor Change Notices: Not Supported 00:08:54.154 Discovery Log Change Notices: Not Supported 00:08:54.154 Controller Attributes 00:08:54.154 128-bit Host Identifier: Not Supported 00:08:54.154 Non-Operational Permissive Mode: Not Supported 00:08:54.155 NVM Sets: Not Supported 00:08:54.155 Read Recovery Levels: Not Supported 00:08:54.155 Endurance Groups: Not Supported 00:08:54.155 Predictable Latency Mode: Not Supported 00:08:54.155 Traffic Based Keep ALive: Not Supported 00:08:54.155 Namespace Granularity: Not Supported 00:08:54.155 SQ Associations: Not Supported 00:08:54.155 UUID List: Not Supported 00:08:54.155 Multi-Domain Subsystem: Not Supported 00:08:54.155 Fixed Capacity Management: Not Supported 00:08:54.155 Variable Capacity Management: Not Supported 00:08:54.155 Delete Endurance Group: Not Supported 00:08:54.155 Delete NVM Set: Not Supported 00:08:54.155 Extended LBA Formats Supported: Supported 00:08:54.155 Flexible Data Placement Supported: Not Supported 00:08:54.155 00:08:54.155 Controller Memory Buffer Support 00:08:54.155 ================================ 00:08:54.155 Supported: No 00:08:54.155 00:08:54.155 Persistent Memory Region Support 00:08:54.155 ================================ 00:08:54.155 Supported: No 00:08:54.155 00:08:54.155 Admin Command Set Attributes 00:08:54.155 ============================ 00:08:54.155 Security Send/Receive: Not Supported 00:08:54.155 Format NVM: Supported 00:08:54.155 Firmware Activate/Download: Not Supported 00:08:54.155 Namespace Management: Supported 00:08:54.155 Device Self-Test: Not Supported 00:08:54.155 Directives: Supported 00:08:54.155 NVMe-MI: Not Supported 00:08:54.155 Virtualization Management: Not Supported 00:08:54.155 Doorbell Buffer Config: Supported 00:08:54.155 Get LBA Status Capability: Not Supported 00:08:54.155 Command & Feature Lockdown Capability: Not Supported 00:08:54.155 Abort Command Limit: 4 00:08:54.155 Async Event Request Limit: 4 00:08:54.155 Number of Firmware Slots: N/A 00:08:54.155 Firmware Slot 1 Read-Only: N/A 00:08:54.155 Firmware Activation Without Reset: N/A 00:08:54.155 Multiple Update Detection Support: N/A 00:08:54.155 Firmware Update Granularity: No Information Provided 00:08:54.155 Per-Namespace SMART Log: Yes 00:08:54.155 Asymmetric Namespace Access Log Page: Not Supported 00:08:54.155 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:54.155 Command Effects Log Page: Supported 00:08:54.155 Get Log Page Extended Data: Supported 00:08:54.155 Telemetry Log Pages: Not Supported 00:08:54.155 Persistent Event Log Pages: Not Supported 00:08:54.155 Supported Log Pages Log Page: May Support 00:08:54.155 Commands Supported & Effects Log Page: Not Supported 00:08:54.155 Feature Identifiers & Effects Log Page:May Support 00:08:54.155 NVMe-MI Commands & Effects Log Page: May Support 00:08:54.155 Data Area 4 for Telemetry Log: Not Supported 00:08:54.155 Error Log Page Entries Supported: 1 00:08:54.155 Keep Alive: Not Supported 00:08:54.155 00:08:54.155 NVM Command Set Attributes 00:08:54.155 ========================== 00:08:54.155 Submission Queue Entry Size 00:08:54.155 Max: 64 00:08:54.155 Min: 64 00:08:54.155 Completion Queue Entry Size 00:08:54.155 Max: 16 
00:08:54.155 Min: 16 00:08:54.155 Number of Namespaces: 256 00:08:54.155 Compare Command: Supported 00:08:54.155 Write Uncorrectable Command: Not Supported 00:08:54.155 Dataset Management Command: Supported 00:08:54.155 Write Zeroes Command: Supported 00:08:54.155 Set Features Save Field: Supported 00:08:54.155 Reservations: Not Supported 00:08:54.155 Timestamp: Supported 00:08:54.155 Copy: Supported 00:08:54.155 Volatile Write Cache: Present 00:08:54.155 Atomic Write Unit (Normal): 1 00:08:54.155 Atomic Write Unit (PFail): 1 00:08:54.155 Atomic Compare & Write Unit: 1 00:08:54.155 Fused Compare & Write: Not Supported 00:08:54.155 Scatter-Gather List 00:08:54.155 SGL Command Set: Supported 00:08:54.155 SGL Keyed: Not Supported 00:08:54.155 SGL Bit Bucket Descriptor: Not Supported 00:08:54.155 SGL Metadata Pointer: Not Supported 00:08:54.155 Oversized SGL: Not Supported 00:08:54.155 SGL Metadata Address: Not Supported 00:08:54.155 SGL Offset: Not Supported 00:08:54.155 Transport SGL Data Block: Not Supported 00:08:54.155 Replay Protected Memory Block: Not Supported 00:08:54.155 00:08:54.155 Firmware Slot Information 00:08:54.155 ========================= 00:08:54.155 Active slot: 1 00:08:54.155 Slot 1 Firmware Revision: 1.0 00:08:54.155 00:08:54.155 00:08:54.155 Commands Supported and Effects 00:08:54.155 ============================== 00:08:54.155 Admin Commands 00:08:54.155 -------------- 00:08:54.155 Delete I/O Submission Queue (00h): Supported 00:08:54.155 Create I/O Submission Queue (01h): Supported 00:08:54.155 Get Log Page (02h): Supported 00:08:54.155 Delete I/O Completion Queue (04h): Supported 00:08:54.155 Create I/O Completion Queue (05h): Supported 00:08:54.155 Identify (06h): Supported 00:08:54.155 Abort (08h): Supported 00:08:54.155 Set Features (09h): Supported 00:08:54.155 Get Features (0Ah): Supported 00:08:54.155 Asynchronous Event Request (0Ch): Supported 00:08:54.155 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:54.155 Directive Send (19h): Supported 00:08:54.155 Directive Receive (1Ah): Supported 00:08:54.155 Virtualization Management (1Ch): Supported 00:08:54.155 Doorbell Buffer Config (7Ch): Supported 00:08:54.155 Format NVM (80h): Supported LBA-Change 00:08:54.155 I/O Commands 00:08:54.155 ------------ 00:08:54.155 Flush (00h): Supported LBA-Change 00:08:54.155 Write (01h): Supported LBA-Change 00:08:54.155 Read (02h): Supported 00:08:54.155 Compare (05h): Supported 00:08:54.155 Write Zeroes (08h): Supported LBA-Change 00:08:54.155 Dataset Management (09h): Supported LBA-Change 00:08:54.155 Unknown (0Ch): Supported 00:08:54.155 Unknown (12h): Supported 00:08:54.155 Copy (19h): Supported LBA-Change 00:08:54.155 Unknown (1Dh): Supported LBA-Change 00:08:54.155 00:08:54.155 Error Log 00:08:54.155 ========= 00:08:54.155 00:08:54.155 Arbitration 00:08:54.155 =========== 00:08:54.155 Arbitration Burst: no limit 00:08:54.155 00:08:54.155 Power Management 00:08:54.155 ================ 00:08:54.155 Number of Power States: 1 00:08:54.155 Current Power State: Power State #0 00:08:54.155 Power State #0: 00:08:54.155 Max Power: 25.00 W 00:08:54.155 Non-Operational State: Operational 00:08:54.155 Entry Latency: 16 microseconds 00:08:54.155 Exit Latency: 4 microseconds 00:08:54.155 Relative Read Throughput: 0 00:08:54.155 Relative Read Latency: 0 00:08:54.155 Relative Write Throughput: 0 00:08:54.155 Relative Write Latency: 0 00:08:54.155 Idle Power: Not Reported 00:08:54.155 Active Power: Not Reported 00:08:54.155 Non-Operational Permissive Mode: Not Supported 
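
The health information that follows reports temperatures in Kelvin with a Celsius equivalent in parentheses; the tool appears to apply a plain integer offset (0 Celsius = 273.15 Kelvin, rounded), which is quick to verify in shell:

  # Kelvin -> Celsius for the values reported below (integer offset, matching the tool's rounding)
  echo $(( 323 - 273 ))   # 50 -- Current Temperature
  echo $(( 343 - 273 ))   # 70 -- Temperature Threshold
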
00:08:54.155 00:08:54.155 Health Information 00:08:54.155 ================== 00:08:54.155 Critical Warnings: 00:08:54.155 Available Spare Space: OK 00:08:54.155 Temperature: OK 00:08:54.155 Device Reliability: OK 00:08:54.156 Read Only: No 00:08:54.156 Volatile Memory Backup: OK 00:08:54.156 Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.156 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:54.156 Available Spare: 0% 00:08:54.156 Available Spare Threshold: 0% 00:08:54.156 Life Percentage Used: 0% 00:08:54.156 Data Units Read: 2531 00:08:54.156 Data Units Written: 2318 00:08:54.156 Host Read Commands: 118834 00:08:54.156 Host Write Commands: 117103 00:08:54.156 Controller Busy Time: 0 minutes 00:08:54.156 Power Cycles: 0 00:08:54.156 Power On Hours: 0 hours 00:08:54.156 Unsafe Shutdowns: 0 00:08:54.156 Unrecoverable Media Errors: 0 00:08:54.156 Lifetime Error Log Entries: 0 00:08:54.156 Warning Temperature Time: 0 minutes 00:08:54.156 Critical Temperature Time: 0 minutes 00:08:54.156 00:08:54.156 Number of Queues 00:08:54.156 ================ 00:08:54.156 Number of I/O Submission Queues: 64 00:08:54.156 Number of I/O Completion Queues: 64 00:08:54.156 00:08:54.156 ZNS Specific Controller Data 00:08:54.156 ============================ 00:08:54.156 Zone Append Size Limit: 0 00:08:54.156 00:08:54.156 00:08:54.156 Active Namespaces 00:08:54.156 ================= 00:08:54.156 Namespace ID:1 00:08:54.156 Error Recovery Timeout: Unlimited 00:08:54.156 Command Set Identifier: NVM (00h) 00:08:54.156 Deallocate: Supported 00:08:54.156 Deallocated/Unwritten Error: Supported 00:08:54.156 Deallocated Read Value: All 0x00 00:08:54.156 Deallocate in Write Zeroes: Not Supported 00:08:54.156 Deallocated Guard Field: 0xFFFF 00:08:54.156 Flush: Supported 00:08:54.156 Reservation: Not Supported 00:08:54.156 Namespace Sharing Capabilities: Private 00:08:54.156 Size (in LBAs): 1048576 (4GiB) 00:08:54.156 Capacity (in LBAs): 1048576 (4GiB) 00:08:54.156 Utilization (in LBAs): 1048576 (4GiB) 00:08:54.156 Thin Provisioning: Not Supported 00:08:54.156 Per-NS Atomic Units: No 00:08:54.156 Maximum Single Source Range Length: 128 00:08:54.156 Maximum Copy Length: 128 00:08:54.156 Maximum Source Range Count: 128 00:08:54.156 NGUID/EUI64 Never Reused: No 00:08:54.156 Namespace Write Protected: No 00:08:54.156 Number of LBA Formats: 8 00:08:54.156 Current LBA Format: LBA Format #04 00:08:54.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:54.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:54.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:54.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:54.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:54.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:54.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:54.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:54.156 00:08:54.156 NVM Specific Namespace Data 00:08:54.156 =========================== 00:08:54.156 Logical Block Storage Tag Mask: 0 00:08:54.156 Protection Information Capabilities: 00:08:54.156 16b Guard Protection Information Storage Tag Support: No 00:08:54.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:54.156 Storage Tag Check Read Support: No 00:08:54.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Namespace ID:2 00:08:54.156 Error Recovery Timeout: Unlimited 00:08:54.156 Command Set Identifier: NVM (00h) 00:08:54.156 Deallocate: Supported 00:08:54.156 Deallocated/Unwritten Error: Supported 00:08:54.156 Deallocated Read Value: All 0x00 00:08:54.156 Deallocate in Write Zeroes: Not Supported 00:08:54.156 Deallocated Guard Field: 0xFFFF 00:08:54.156 Flush: Supported 00:08:54.156 Reservation: Not Supported 00:08:54.156 Namespace Sharing Capabilities: Private 00:08:54.156 Size (in LBAs): 1048576 (4GiB) 00:08:54.156 Capacity (in LBAs): 1048576 (4GiB) 00:08:54.156 Utilization (in LBAs): 1048576 (4GiB) 00:08:54.156 Thin Provisioning: Not Supported 00:08:54.156 Per-NS Atomic Units: No 00:08:54.156 Maximum Single Source Range Length: 128 00:08:54.156 Maximum Copy Length: 128 00:08:54.156 Maximum Source Range Count: 128 00:08:54.156 NGUID/EUI64 Never Reused: No 00:08:54.156 Namespace Write Protected: No 00:08:54.156 Number of LBA Formats: 8 00:08:54.156 Current LBA Format: LBA Format #04 00:08:54.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:54.156 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:54.156 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:54.156 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:54.156 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:54.156 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:54.156 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:54.156 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:54.156 00:08:54.156 NVM Specific Namespace Data 00:08:54.156 =========================== 00:08:54.156 Logical Block Storage Tag Mask: 0 00:08:54.156 Protection Information Capabilities: 00:08:54.156 16b Guard Protection Information Storage Tag Support: No 00:08:54.156 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:54.156 Storage Tag Check Read Support: No 00:08:54.156 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.156 Namespace ID:3 00:08:54.156 Error Recovery Timeout: Unlimited 00:08:54.156 Command Set Identifier: NVM (00h) 00:08:54.156 Deallocate: Supported 00:08:54.156 Deallocated/Unwritten Error: Supported 00:08:54.156 Deallocated Read 
Value: All 0x00 00:08:54.156 Deallocate in Write Zeroes: Not Supported 00:08:54.156 Deallocated Guard Field: 0xFFFF 00:08:54.156 Flush: Supported 00:08:54.156 Reservation: Not Supported 00:08:54.156 Namespace Sharing Capabilities: Private 00:08:54.156 Size (in LBAs): 1048576 (4GiB) 00:08:54.156 Capacity (in LBAs): 1048576 (4GiB) 00:08:54.156 Utilization (in LBAs): 1048576 (4GiB) 00:08:54.156 Thin Provisioning: Not Supported 00:08:54.157 Per-NS Atomic Units: No 00:08:54.157 Maximum Single Source Range Length: 128 00:08:54.157 Maximum Copy Length: 128 00:08:54.157 Maximum Source Range Count: 128 00:08:54.157 NGUID/EUI64 Never Reused: No 00:08:54.157 Namespace Write Protected: No 00:08:54.157 Number of LBA Formats: 8 00:08:54.157 Current LBA Format: LBA Format #04 00:08:54.157 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:54.157 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:54.157 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:54.157 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:54.157 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:54.157 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:54.157 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:54.157 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:54.157 00:08:54.157 NVM Specific Namespace Data 00:08:54.157 =========================== 00:08:54.157 Logical Block Storage Tag Mask: 0 00:08:54.157 Protection Information Capabilities: 00:08:54.157 16b Guard Protection Information Storage Tag Support: No 00:08:54.157 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:54.416 Storage Tag Check Read Support: No 00:08:54.416 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.416 10:52:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:54.416 10:52:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:54.676 ===================================================== 00:08:54.677 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:54.677 ===================================================== 00:08:54.677 Controller Capabilities/Features 00:08:54.677 ================================ 00:08:54.677 Vendor ID: 1b36 00:08:54.677 Subsystem Vendor ID: 1af4 00:08:54.677 Serial Number: 12343 00:08:54.677 Model Number: QEMU NVMe Ctrl 00:08:54.677 Firmware Version: 8.0.0 00:08:54.677 Recommended Arb Burst: 6 00:08:54.677 IEEE OUI Identifier: 00 54 52 00:08:54.677 Multi-path I/O 00:08:54.677 May have multiple subsystem ports: No 00:08:54.677 May have multiple controllers: Yes 00:08:54.677 Associated with SR-IOV VF: No 00:08:54.677 Max Data Transfer Size: 524288 00:08:54.677 Max Number of Namespaces: 
256 00:08:54.677 Max Number of I/O Queues: 64 00:08:54.677 NVMe Specification Version (VS): 1.4 00:08:54.677 NVMe Specification Version (Identify): 1.4 00:08:54.677 Maximum Queue Entries: 2048 00:08:54.677 Contiguous Queues Required: Yes 00:08:54.677 Arbitration Mechanisms Supported 00:08:54.677 Weighted Round Robin: Not Supported 00:08:54.677 Vendor Specific: Not Supported 00:08:54.677 Reset Timeout: 7500 ms 00:08:54.677 Doorbell Stride: 4 bytes 00:08:54.677 NVM Subsystem Reset: Not Supported 00:08:54.677 Command Sets Supported 00:08:54.677 NVM Command Set: Supported 00:08:54.677 Boot Partition: Not Supported 00:08:54.677 Memory Page Size Minimum: 4096 bytes 00:08:54.677 Memory Page Size Maximum: 65536 bytes 00:08:54.677 Persistent Memory Region: Not Supported 00:08:54.677 Optional Asynchronous Events Supported 00:08:54.677 Namespace Attribute Notices: Supported 00:08:54.677 Firmware Activation Notices: Not Supported 00:08:54.677 ANA Change Notices: Not Supported 00:08:54.677 PLE Aggregate Log Change Notices: Not Supported 00:08:54.677 LBA Status Info Alert Notices: Not Supported 00:08:54.677 EGE Aggregate Log Change Notices: Not Supported 00:08:54.677 Normal NVM Subsystem Shutdown event: Not Supported 00:08:54.677 Zone Descriptor Change Notices: Not Supported 00:08:54.677 Discovery Log Change Notices: Not Supported 00:08:54.677 Controller Attributes 00:08:54.677 128-bit Host Identifier: Not Supported 00:08:54.677 Non-Operational Permissive Mode: Not Supported 00:08:54.677 NVM Sets: Not Supported 00:08:54.677 Read Recovery Levels: Not Supported 00:08:54.677 Endurance Groups: Supported 00:08:54.677 Predictable Latency Mode: Not Supported 00:08:54.677 Traffic Based Keep Alive: Not Supported 00:08:54.677 Namespace Granularity: Not Supported 00:08:54.677 SQ Associations: Not Supported 00:08:54.677 UUID List: Not Supported 00:08:54.677 Multi-Domain Subsystem: Not Supported 00:08:54.677 Fixed Capacity Management: Not Supported 00:08:54.677 Variable Capacity Management: Not Supported 00:08:54.677 Delete Endurance Group: Not Supported 00:08:54.677 Delete NVM Set: Not Supported 00:08:54.677 Extended LBA Formats Supported: Supported 00:08:54.677 Flexible Data Placement Supported: Supported 00:08:54.677 00:08:54.677 Controller Memory Buffer Support 00:08:54.677 ================================ 00:08:54.677 Supported: No 00:08:54.677 00:08:54.677 Persistent Memory Region Support 00:08:54.677 ================================ 00:08:54.677 Supported: No 00:08:54.677 00:08:54.677 Admin Command Set Attributes 00:08:54.677 ============================ 00:08:54.677 Security Send/Receive: Not Supported 00:08:54.677 Format NVM: Supported 00:08:54.677 Firmware Activate/Download: Not Supported 00:08:54.677 Namespace Management: Supported 00:08:54.677 Device Self-Test: Not Supported 00:08:54.677 Directives: Supported 00:08:54.677 NVMe-MI: Not Supported 00:08:54.677 Virtualization Management: Not Supported 00:08:54.677 Doorbell Buffer Config: Supported 00:08:54.677 Get LBA Status Capability: Not Supported 00:08:54.677 Command & Feature Lockdown Capability: Not Supported 00:08:54.677 Abort Command Limit: 4 00:08:54.677 Async Event Request Limit: 4 00:08:54.677 Number of Firmware Slots: N/A 00:08:54.677 Firmware Slot 1 Read-Only: N/A 00:08:54.677 Firmware Activation Without Reset: N/A 00:08:54.677 Multiple Update Detection Support: N/A 00:08:54.677 Firmware Update Granularity: No Information Provided 00:08:54.677 Per-Namespace SMART Log: Yes 00:08:54.677 Asymmetric Namespace Access Log Page: Not Supported
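(Aside, not part of the captured output: the capability flags just printed -- notably "Endurance Groups: Supported" and "Flexible Data Placement Supported: Supported" -- can be re-checked by hand with the same identify invocation this test runs; the grep filter is an illustrative addition, and the tool typically needs root:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 \
      | grep -E 'Flexible Data Placement|Endurance Groups')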
00:08:54.677 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:54.677 Command Effects Log Page: Supported 00:08:54.677 Get Log Page Extended Data: Supported 00:08:54.677 Telemetry Log Pages: Not Supported 00:08:54.677 Persistent Event Log Pages: Not Supported 00:08:54.677 Supported Log Pages Log Page: May Support 00:08:54.677 Commands Supported & Effects Log Page: Not Supported 00:08:54.677 Feature Identifiers & Effects Log Page: May Support 00:08:54.677 NVMe-MI Commands & Effects Log Page: May Support 00:08:54.677 Data Area 4 for Telemetry Log: Not Supported 00:08:54.677 Error Log Page Entries Supported: 1 00:08:54.677 Keep Alive: Not Supported 00:08:54.677 00:08:54.677 NVM Command Set Attributes 00:08:54.677 ========================== 00:08:54.677 Submission Queue Entry Size 00:08:54.677 Max: 64 00:08:54.677 Min: 64 00:08:54.677 Completion Queue Entry Size 00:08:54.677 Max: 16 00:08:54.677 Min: 16 00:08:54.677 Number of Namespaces: 256 00:08:54.677 Compare Command: Supported 00:08:54.677 Write Uncorrectable Command: Not Supported 00:08:54.677 Dataset Management Command: Supported 00:08:54.677 Write Zeroes Command: Supported 00:08:54.677 Set Features Save Field: Supported 00:08:54.677 Reservations: Not Supported 00:08:54.677 Timestamp: Supported 00:08:54.677 Copy: Supported 00:08:54.677 Volatile Write Cache: Present 00:08:54.677 Atomic Write Unit (Normal): 1 00:08:54.677 Atomic Write Unit (PFail): 1 00:08:54.677 Atomic Compare & Write Unit: 1 00:08:54.677 Fused Compare & Write: Not Supported 00:08:54.677 Scatter-Gather List 00:08:54.677 SGL Command Set: Supported 00:08:54.677 SGL Keyed: Not Supported 00:08:54.677 SGL Bit Bucket Descriptor: Not Supported 00:08:54.677 SGL Metadata Pointer: Not Supported 00:08:54.677 Oversized SGL: Not Supported 00:08:54.677 SGL Metadata Address: Not Supported 00:08:54.677 SGL Offset: Not Supported 00:08:54.677 Transport SGL Data Block: Not Supported 00:08:54.677 Replay Protected Memory Block: Not Supported 00:08:54.677 00:08:54.677 Firmware Slot Information 00:08:54.677 ========================= 00:08:54.677 Active slot: 1 00:08:54.677 Slot 1 Firmware Revision: 1.0 00:08:54.677 00:08:54.677 00:08:54.677 Commands Supported and Effects 00:08:54.677 ============================== 00:08:54.677 Admin Commands 00:08:54.677 -------------- 00:08:54.677 Delete I/O Submission Queue (00h): Supported 00:08:54.677 Create I/O Submission Queue (01h): Supported 00:08:54.677 Get Log Page (02h): Supported 00:08:54.677 Delete I/O Completion Queue (04h): Supported 00:08:54.677 Create I/O Completion Queue (05h): Supported 00:08:54.677 Identify (06h): Supported 00:08:54.677 Abort (08h): Supported 00:08:54.677 Set Features (09h): Supported 00:08:54.677 Get Features (0Ah): Supported 00:08:54.677 Asynchronous Event Request (0Ch): Supported 00:08:54.677 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:54.677 Directive Send (19h): Supported 00:08:54.677 Directive Receive (1Ah): Supported 00:08:54.677 Virtualization Management (1Ch): Supported 00:08:54.677 Doorbell Buffer Config (7Ch): Supported 00:08:54.677 Format NVM (80h): Supported LBA-Change 00:08:54.677 I/O Commands 00:08:54.677 ------------ 00:08:54.678 Flush (00h): Supported LBA-Change 00:08:54.678 Write (01h): Supported LBA-Change 00:08:54.678 Read (02h): Supported 00:08:54.678 Compare (05h): Supported 00:08:54.678 Write Zeroes (08h): Supported LBA-Change 00:08:54.678 Dataset Management (09h): Supported LBA-Change 00:08:54.678 Unknown (0Ch): Supported 00:08:54.678 Unknown (12h): Supported 00:08:54.678 Copy
(19h): Supported LBA-Change 00:08:54.678 Unknown (1Dh): Supported LBA-Change 00:08:54.678 00:08:54.678 Error Log 00:08:54.678 ========= 00:08:54.678 00:08:54.678 Arbitration 00:08:54.678 =========== 00:08:54.678 Arbitration Burst: no limit 00:08:54.678 00:08:54.678 Power Management 00:08:54.678 ================ 00:08:54.678 Number of Power States: 1 00:08:54.678 Current Power State: Power State #0 00:08:54.678 Power State #0: 00:08:54.678 Max Power: 25.00 W 00:08:54.678 Non-Operational State: Operational 00:08:54.678 Entry Latency: 16 microseconds 00:08:54.678 Exit Latency: 4 microseconds 00:08:54.678 Relative Read Throughput: 0 00:08:54.678 Relative Read Latency: 0 00:08:54.678 Relative Write Throughput: 0 00:08:54.678 Relative Write Latency: 0 00:08:54.678 Idle Power: Not Reported 00:08:54.678 Active Power: Not Reported 00:08:54.678 Non-Operational Permissive Mode: Not Supported 00:08:54.678 00:08:54.678 Health Information 00:08:54.678 ================== 00:08:54.678 Critical Warnings: 00:08:54.678 Available Spare Space: OK 00:08:54.678 Temperature: OK 00:08:54.678 Device Reliability: OK 00:08:54.678 Read Only: No 00:08:54.678 Volatile Memory Backup: OK 00:08:54.678 Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.678 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:54.678 Available Spare: 0% 00:08:54.678 Available Spare Threshold: 0% 00:08:54.678 Life Percentage Used: 0% 00:08:54.678 Data Units Read: 918 00:08:54.678 Data Units Written: 847 00:08:54.678 Host Read Commands: 40192 00:08:54.678 Host Write Commands: 39615 00:08:54.678 Controller Busy Time: 0 minutes 00:08:54.678 Power Cycles: 0 00:08:54.678 Power On Hours: 0 hours 00:08:54.678 Unsafe Shutdowns: 0 00:08:54.678 Unrecoverable Media Errors: 0 00:08:54.678 Lifetime Error Log Entries: 0 00:08:54.678 Warning Temperature Time: 0 minutes 00:08:54.678 Critical Temperature Time: 0 minutes 00:08:54.678 00:08:54.678 Number of Queues 00:08:54.678 ================ 00:08:54.678 Number of I/O Submission Queues: 64 00:08:54.678 Number of I/O Completion Queues: 64 00:08:54.678 00:08:54.678 ZNS Specific Controller Data 00:08:54.678 ============================ 00:08:54.678 Zone Append Size Limit: 0 00:08:54.678 00:08:54.678 00:08:54.678 Active Namespaces 00:08:54.678 ================= 00:08:54.678 Namespace ID:1 00:08:54.678 Error Recovery Timeout: Unlimited 00:08:54.678 Command Set Identifier: NVM (00h) 00:08:54.678 Deallocate: Supported 00:08:54.678 Deallocated/Unwritten Error: Supported 00:08:54.678 Deallocated Read Value: All 0x00 00:08:54.678 Deallocate in Write Zeroes: Not Supported 00:08:54.678 Deallocated Guard Field: 0xFFFF 00:08:54.678 Flush: Supported 00:08:54.678 Reservation: Not Supported 00:08:54.678 Namespace Sharing Capabilities: Multiple Controllers 00:08:54.678 Size (in LBAs): 262144 (1GiB) 00:08:54.678 Capacity (in LBAs): 262144 (1GiB) 00:08:54.678 Utilization (in LBAs): 262144 (1GiB) 00:08:54.678 Thin Provisioning: Not Supported 00:08:54.678 Per-NS Atomic Units: No 00:08:54.678 Maximum Single Source Range Length: 128 00:08:54.678 Maximum Copy Length: 128 00:08:54.678 Maximum Source Range Count: 128 00:08:54.678 NGUID/EUI64 Never Reused: No 00:08:54.678 Namespace Write Protected: No 00:08:54.678 Endurance group ID: 1 00:08:54.678 Number of LBA Formats: 8 00:08:54.678 Current LBA Format: LBA Format #04 00:08:54.678 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:54.678 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:54.678 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:54.678 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:54.678 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:54.678 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:54.678 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:54.678 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:54.678 00:08:54.678 Get Feature FDP: 00:08:54.678 ================ 00:08:54.678 Enabled: Yes 00:08:54.678 FDP configuration index: 0 00:08:54.678 00:08:54.678 FDP configurations log page 00:08:54.678 =========================== 00:08:54.678 Number of FDP configurations: 1 00:08:54.678 Version: 0 00:08:54.678 Size: 112 00:08:54.678 FDP Configuration Descriptor: 0 00:08:54.678 Descriptor Size: 96 00:08:54.678 Reclaim Group Identifier format: 2 00:08:54.678 FDP Volatile Write Cache: Not Present 00:08:54.678 FDP Configuration: Valid 00:08:54.678 Vendor Specific Size: 0 00:08:54.678 Number of Reclaim Groups: 2 00:08:54.678 Number of Reclaim Unit Handles: 8 00:08:54.678 Max Placement Identifiers: 128 00:08:54.678 Number of Namespaces Supported: 256 00:08:54.678 Reclaim Unit Nominal Size: 6000000 bytes 00:08:54.678 Estimated Reclaim Unit Time Limit: Not Reported 00:08:54.678 RUH Desc #000: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #001: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #002: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #003: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #004: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #005: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #006: RUH Type: Initially Isolated 00:08:54.678 RUH Desc #007: RUH Type: Initially Isolated 00:08:54.678 00:08:54.678 FDP reclaim unit handle usage log page 00:08:54.678 ====================================== 00:08:54.678 Number of Reclaim Unit Handles: 8 00:08:54.678 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:54.678 RUH Usage Desc #001: RUH Attributes: Unused 00:08:54.678 RUH Usage Desc #002: RUH Attributes: Unused 00:08:54.678 RUH Usage Desc #003: RUH Attributes: Unused 00:08:54.678 RUH Usage Desc #004: RUH Attributes: Unused 00:08:54.678 RUH Usage Desc #005: RUH Attributes: Unused 00:08:54.678 RUH Usage Desc #006: RUH Attributes: Unused 00:08:54.678 RUH Usage Desc #007: RUH Attributes: Unused 00:08:54.678 00:08:54.678 FDP statistics log page 00:08:54.678 ======================= 00:08:54.678 Host bytes with metadata written: 546086912 00:08:54.678 Media bytes with metadata written: 546164736 00:08:54.678 Media bytes erased: 0 00:08:54.678 00:08:54.678 FDP events log page 00:08:54.679 =================== 00:08:54.679 Number of FDP events: 0 00:08:54.679 00:08:54.679 NVM Specific Namespace Data 00:08:54.679 =========================== 00:08:54.679 Logical Block Storage Tag Mask: 0 00:08:54.679 Protection Information Capabilities: 00:08:54.679 16b Guard Protection Information Storage Tag Support: No 00:08:54.679 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:54.679 Storage Tag Check Read Support: No 00:08:54.679 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:54.679 00:08:54.679 real 0m1.689s 00:08:54.679 user 0m0.616s 00:08:54.679 sys 0m0.872s 00:08:54.679 10:52:41 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.679 10:52:41 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:54.679 ************************************ 00:08:54.679 END TEST nvme_identify 00:08:54.679 ************************************ 00:08:54.679 10:52:41 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:54.679 10:52:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.679 10:52:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.679 10:52:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.679 ************************************ 00:08:54.679 START TEST nvme_perf 00:08:54.679 ************************************ 00:08:54.679 10:52:41 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:54.679 10:52:41 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:56.133 Initializing NVMe Controllers 00:08:56.133 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:56.133 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:56.133 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:56.133 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:56.133 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:56.133 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:56.133 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:56.133 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:56.133 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:56.133 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:56.133 Initialization complete. Launching workers. 
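(Aside, not part of the captured output: in the read-workload table that follows, the MiB/s column is just IOPS scaled by the fixed 12288-byte (12 KiB) I/O size selected with -o 12288. A quick sanity check of the first row, assuming awk is available on the host:
    awk 'BEGIN { printf "%.2f MiB/s\n", 14192.05 * 12288 / 1048576 }'   # prints 166.31 MiB/s, matching the table)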
00:08:56.133 ======================================================== 00:08:56.133 Latency(us) 00:08:56.133 Device Information : IOPS MiB/s Average min max 00:08:56.133 PCIE (0000:00:10.0) NSID 1 from core 0: 14192.05 166.31 9038.14 7880.14 52168.51 00:08:56.133 PCIE (0000:00:11.0) NSID 1 from core 0: 14192.05 166.31 9020.90 7922.87 49853.74 00:08:56.133 PCIE (0000:00:13.0) NSID 1 from core 0: 14192.05 166.31 9002.18 7976.09 47963.80 00:08:56.133 PCIE (0000:00:12.0) NSID 1 from core 0: 14192.05 166.31 8982.75 7988.06 45585.30 00:08:56.133 PCIE (0000:00:12.0) NSID 2 from core 0: 14192.05 166.31 8963.67 7955.35 43185.77 00:08:56.133 PCIE (0000:00:12.0) NSID 3 from core 0: 14255.98 167.06 8904.21 7934.25 35755.86 00:08:56.133 ======================================================== 00:08:56.133 Total : 85216.22 998.63 8985.25 7880.14 52168.51 00:08:56.133 00:08:56.133 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:56.133 ================================================================================= 00:08:56.133 1.00000% : 8053.822us 00:08:56.133 10.00000% : 8264.379us 00:08:56.133 25.00000% : 8422.297us 00:08:56.133 50.00000% : 8685.494us 00:08:56.133 75.00000% : 8948.691us 00:08:56.134 90.00000% : 9159.248us 00:08:56.134 95.00000% : 9369.806us 00:08:56.134 98.00000% : 10001.478us 00:08:56.134 99.00000% : 11475.380us 00:08:56.134 99.50000% : 45059.290us 00:08:56.134 99.90000% : 51797.128us 00:08:56.134 99.99000% : 52218.243us 00:08:56.134 99.99900% : 52218.243us 00:08:56.134 99.99990% : 52218.243us 00:08:56.134 99.99999% : 52218.243us 00:08:56.134 00:08:56.134 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:56.134 ================================================================================= 00:08:56.134 1.00000% : 8159.100us 00:08:56.134 10.00000% : 8317.018us 00:08:56.134 25.00000% : 8474.937us 00:08:56.134 50.00000% : 8685.494us 00:08:56.134 75.00000% : 8896.051us 00:08:56.134 90.00000% : 9106.609us 00:08:56.134 95.00000% : 9369.806us 00:08:56.134 98.00000% : 10001.478us 00:08:56.134 99.00000% : 11791.216us 00:08:56.134 99.50000% : 42953.716us 00:08:56.134 99.90000% : 49480.996us 00:08:56.134 99.99000% : 49902.111us 00:08:56.134 99.99900% : 49902.111us 00:08:56.134 99.99990% : 49902.111us 00:08:56.134 99.99999% : 49902.111us 00:08:56.134 00:08:56.134 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:56.134 ================================================================================= 00:08:56.134 1.00000% : 8159.100us 00:08:56.134 10.00000% : 8317.018us 00:08:56.134 25.00000% : 8474.937us 00:08:56.134 50.00000% : 8685.494us 00:08:56.134 75.00000% : 8896.051us 00:08:56.134 90.00000% : 9106.609us 00:08:56.134 95.00000% : 9369.806us 00:08:56.134 98.00000% : 9948.839us 00:08:56.134 99.00000% : 11580.659us 00:08:56.134 99.50000% : 41058.699us 00:08:56.134 99.90000% : 47585.979us 00:08:56.134 99.99000% : 48007.094us 00:08:56.134 99.99900% : 48007.094us 00:08:56.134 99.99990% : 48007.094us 00:08:56.134 99.99999% : 48007.094us 00:08:56.134 00:08:56.134 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:56.134 ================================================================================= 00:08:56.134 1.00000% : 8106.461us 00:08:56.134 10.00000% : 8317.018us 00:08:56.134 25.00000% : 8474.937us 00:08:56.134 50.00000% : 8685.494us 00:08:56.134 75.00000% : 8896.051us 00:08:56.134 90.00000% : 9106.609us 00:08:56.134 95.00000% : 9369.806us 00:08:56.134 98.00000% : 10054.117us 00:08:56.134 99.00000% : 
11896.495us 00:08:56.134 99.50000% : 38742.567us 00:08:56.134 99.90000% : 45269.847us 00:08:56.134 99.99000% : 45690.962us 00:08:56.134 99.99900% : 45690.962us 00:08:56.134 99.99990% : 45690.962us 00:08:56.134 99.99999% : 45690.962us 00:08:56.134 00:08:56.134 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:56.134 ================================================================================= 00:08:56.134 1.00000% : 8106.461us 00:08:56.134 10.00000% : 8317.018us 00:08:56.134 25.00000% : 8474.937us 00:08:56.134 50.00000% : 8685.494us 00:08:56.134 75.00000% : 8896.051us 00:08:56.134 90.00000% : 9106.609us 00:08:56.134 95.00000% : 9369.806us 00:08:56.134 98.00000% : 9948.839us 00:08:56.134 99.00000% : 12212.331us 00:08:56.134 99.50000% : 36426.435us 00:08:56.134 99.90000% : 42953.716us 00:08:56.134 99.99000% : 43164.273us 00:08:56.134 99.99900% : 43374.831us 00:08:56.134 99.99990% : 43374.831us 00:08:56.134 99.99999% : 43374.831us 00:08:56.134 00:08:56.134 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:56.134 ================================================================================= 00:08:56.134 1.00000% : 8106.461us 00:08:56.134 10.00000% : 8317.018us 00:08:56.134 25.00000% : 8474.937us 00:08:56.134 50.00000% : 8685.494us 00:08:56.134 75.00000% : 8896.051us 00:08:56.134 90.00000% : 9106.609us 00:08:56.134 95.00000% : 9422.445us 00:08:56.134 98.00000% : 10159.396us 00:08:56.134 99.00000% : 12580.806us 00:08:56.134 99.50000% : 29056.925us 00:08:56.134 99.90000% : 35373.648us 00:08:56.134 99.99000% : 35794.763us 00:08:56.134 99.99900% : 35794.763us 00:08:56.134 99.99990% : 35794.763us 00:08:56.134 99.99999% : 35794.763us 00:08:56.134 00:08:56.134 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:56.134 ============================================================================== 00:08:56.134 Range in us Cumulative IO count 00:08:56.134 7843.264 - 7895.904: 0.0282% ( 4) 00:08:56.134 7895.904 - 7948.543: 0.1619% ( 19) 00:08:56.134 7948.543 - 8001.182: 0.5138% ( 50) 00:08:56.134 8001.182 - 8053.822: 1.4358% ( 131) 00:08:56.134 8053.822 - 8106.461: 3.4840% ( 291) 00:08:56.134 8106.461 - 8159.100: 6.1796% ( 383) 00:08:56.134 8159.100 - 8211.740: 9.6565% ( 494) 00:08:56.134 8211.740 - 8264.379: 13.6332% ( 565) 00:08:56.134 8264.379 - 8317.018: 18.2714% ( 659) 00:08:56.134 8317.018 - 8369.658: 22.7970% ( 643) 00:08:56.134 8369.658 - 8422.297: 27.6534% ( 690) 00:08:56.134 8422.297 - 8474.937: 32.2706% ( 656) 00:08:56.134 8474.937 - 8527.576: 37.2396% ( 706) 00:08:56.134 8527.576 - 8580.215: 42.0327% ( 681) 00:08:56.134 8580.215 - 8632.855: 46.8961% ( 691) 00:08:56.134 8632.855 - 8685.494: 51.6822% ( 680) 00:08:56.134 8685.494 - 8738.133: 56.8201% ( 730) 00:08:56.134 8738.133 - 8790.773: 61.8595% ( 716) 00:08:56.134 8790.773 - 8843.412: 66.8637% ( 711) 00:08:56.134 8843.412 - 8896.051: 71.9524% ( 723) 00:08:56.134 8896.051 - 8948.691: 76.7736% ( 685) 00:08:56.134 8948.691 - 9001.330: 81.3415% ( 649) 00:08:56.134 9001.330 - 9053.969: 85.4519% ( 584) 00:08:56.134 9053.969 - 9106.609: 88.5698% ( 443) 00:08:56.134 9106.609 - 9159.248: 90.9417% ( 337) 00:08:56.134 9159.248 - 9211.888: 92.6591% ( 244) 00:08:56.134 9211.888 - 9264.527: 93.7852% ( 160) 00:08:56.134 9264.527 - 9317.166: 94.5101% ( 103) 00:08:56.134 9317.166 - 9369.806: 95.0943% ( 83) 00:08:56.134 9369.806 - 9422.445: 95.5729% ( 68) 00:08:56.134 9422.445 - 9475.084: 95.9741% ( 57) 00:08:56.134 9475.084 - 9527.724: 96.2838% ( 44) 00:08:56.134 9527.724 - 9580.363: 96.6427% ( 
51) 00:08:56.134 9580.363 - 9633.002: 96.9665% ( 46) 00:08:56.134 9633.002 - 9685.642: 97.1988% ( 33) 00:08:56.134 9685.642 - 9738.281: 97.4521% ( 36) 00:08:56.134 9738.281 - 9790.920: 97.6140% ( 23) 00:08:56.134 9790.920 - 9843.560: 97.7970% ( 26) 00:08:56.134 9843.560 - 9896.199: 97.8956% ( 14) 00:08:56.134 9896.199 - 9948.839: 97.9519% ( 8) 00:08:56.134 9948.839 - 10001.478: 98.0293% ( 11) 00:08:56.134 10001.478 - 10054.117: 98.1137% ( 12) 00:08:56.134 10054.117 - 10106.757: 98.1630% ( 7) 00:08:56.134 10106.757 - 10159.396: 98.2334% ( 10) 00:08:56.134 10159.396 - 10212.035: 98.3108% ( 11) 00:08:56.134 10212.035 - 10264.675: 98.3742% ( 9) 00:08:56.134 10264.675 - 10317.314: 98.4305% ( 8) 00:08:56.134 10317.314 - 10369.953: 98.4868% ( 8) 00:08:56.134 10369.953 - 10422.593: 98.5220% ( 5) 00:08:56.134 10422.593 - 10475.232: 98.5783% ( 8) 00:08:56.134 10475.232 - 10527.871: 98.6135% ( 5) 00:08:56.134 10527.871 - 10580.511: 98.6486% ( 5) 00:08:56.134 10580.511 - 10633.150: 98.6909% ( 6) 00:08:56.134 10633.150 - 10685.790: 98.7401% ( 7) 00:08:56.134 10685.790 - 10738.429: 98.7683% ( 4) 00:08:56.134 10738.429 - 10791.068: 98.7965% ( 4) 00:08:56.134 10791.068 - 10843.708: 98.8105% ( 2) 00:08:56.134 10843.708 - 10896.347: 98.8176% ( 1) 00:08:56.134 10896.347 - 10948.986: 98.8387% ( 3) 00:08:56.134 10948.986 - 11001.626: 98.8528% ( 2) 00:08:56.134 11001.626 - 11054.265: 98.8739% ( 3) 00:08:56.134 11054.265 - 11106.904: 98.8880% ( 2) 00:08:56.134 11106.904 - 11159.544: 98.9020% ( 2) 00:08:56.134 11159.544 - 11212.183: 98.9161% ( 2) 00:08:56.134 11212.183 - 11264.822: 98.9372% ( 3) 00:08:56.134 11264.822 - 11317.462: 98.9583% ( 3) 00:08:56.134 11317.462 - 11370.101: 98.9724% ( 2) 00:08:56.134 11370.101 - 11422.741: 98.9865% ( 2) 00:08:56.134 11422.741 - 11475.380: 99.0006% ( 2) 00:08:56.134 11475.380 - 11528.019: 99.0146% ( 2) 00:08:56.134 11528.019 - 11580.659: 99.0428% ( 4) 00:08:56.134 11580.659 - 11633.298: 99.0498% ( 1) 00:08:56.134 11633.298 - 11685.937: 99.0639% ( 2) 00:08:56.134 11685.937 - 11738.577: 99.0780% ( 2) 00:08:56.134 11738.577 - 11791.216: 99.0921% ( 2) 00:08:56.134 11791.216 - 11843.855: 99.0991% ( 1) 00:08:56.134 43164.273 - 43374.831: 99.1413% ( 6) 00:08:56.134 43374.831 - 43585.388: 99.1906% ( 7) 00:08:56.134 43585.388 - 43795.945: 99.2328% ( 6) 00:08:56.134 43795.945 - 44006.503: 99.2821% ( 7) 00:08:56.134 44006.503 - 44217.060: 99.3314% ( 7) 00:08:56.134 44217.060 - 44427.618: 99.3806% ( 7) 00:08:56.134 44427.618 - 44638.175: 99.4299% ( 7) 00:08:56.134 44638.175 - 44848.733: 99.4721% ( 6) 00:08:56.134 44848.733 - 45059.290: 99.5214% ( 7) 00:08:56.134 45059.290 - 45269.847: 99.5495% ( 4) 00:08:56.134 50112.668 - 50323.226: 99.5918% ( 6) 00:08:56.134 50323.226 - 50533.783: 99.6340% ( 6) 00:08:56.134 50533.783 - 50744.341: 99.6833% ( 7) 00:08:56.134 50744.341 - 50954.898: 99.7325% ( 7) 00:08:56.134 50954.898 - 51165.455: 99.7818% ( 7) 00:08:56.134 51165.455 - 51376.013: 99.8311% ( 7) 00:08:56.134 51376.013 - 51586.570: 99.8733% ( 6) 00:08:56.134 51586.570 - 51797.128: 99.9226% ( 7) 00:08:56.134 51797.128 - 52007.685: 99.9648% ( 6) 00:08:56.134 52007.685 - 52218.243: 100.0000% ( 5) 00:08:56.134 00:08:56.134 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:56.134 ============================================================================== 00:08:56.134 Range in us Cumulative IO count 00:08:56.134 7895.904 - 7948.543: 0.0282% ( 4) 00:08:56.134 7948.543 - 8001.182: 0.0493% ( 3) 00:08:56.134 8001.182 - 8053.822: 0.2956% ( 35) 00:08:56.135 8053.822 - 8106.461: 
0.9150% ( 88) 00:08:56.135 8106.461 - 8159.100: 2.3438% ( 203) 00:08:56.135 8159.100 - 8211.740: 5.0605% ( 386) 00:08:56.135 8211.740 - 8264.379: 8.6782% ( 514) 00:08:56.135 8264.379 - 8317.018: 13.1616% ( 637) 00:08:56.135 8317.018 - 8369.658: 18.3699% ( 740) 00:08:56.135 8369.658 - 8422.297: 23.9794% ( 797) 00:08:56.135 8422.297 - 8474.937: 29.6945% ( 812) 00:08:56.135 8474.937 - 8527.576: 35.4519% ( 818) 00:08:56.135 8527.576 - 8580.215: 41.2725% ( 827) 00:08:56.135 8580.215 - 8632.855: 47.0650% ( 823) 00:08:56.135 8632.855 - 8685.494: 52.8716% ( 825) 00:08:56.135 8685.494 - 8738.133: 58.7416% ( 834) 00:08:56.135 8738.133 - 8790.773: 64.5622% ( 827) 00:08:56.135 8790.773 - 8843.412: 70.5025% ( 844) 00:08:56.135 8843.412 - 8896.051: 76.1965% ( 809) 00:08:56.135 8896.051 - 8948.691: 81.2922% ( 724) 00:08:56.135 8948.691 - 9001.330: 85.4800% ( 595) 00:08:56.135 9001.330 - 9053.969: 88.6402% ( 449) 00:08:56.135 9053.969 - 9106.609: 90.9840% ( 333) 00:08:56.135 9106.609 - 9159.248: 92.6380% ( 235) 00:08:56.135 9159.248 - 9211.888: 93.5881% ( 135) 00:08:56.135 9211.888 - 9264.527: 94.2497% ( 94) 00:08:56.135 9264.527 - 9317.166: 94.8128% ( 80) 00:08:56.135 9317.166 - 9369.806: 95.2562% ( 63) 00:08:56.135 9369.806 - 9422.445: 95.6363% ( 54) 00:08:56.135 9422.445 - 9475.084: 96.0163% ( 54) 00:08:56.135 9475.084 - 9527.724: 96.3823% ( 52) 00:08:56.135 9527.724 - 9580.363: 96.7483% ( 52) 00:08:56.135 9580.363 - 9633.002: 97.0650% ( 45) 00:08:56.135 9633.002 - 9685.642: 97.3466% ( 40) 00:08:56.135 9685.642 - 9738.281: 97.5014% ( 22) 00:08:56.135 9738.281 - 9790.920: 97.6562% ( 22) 00:08:56.135 9790.920 - 9843.560: 97.7477% ( 13) 00:08:56.135 9843.560 - 9896.199: 97.8533% ( 15) 00:08:56.135 9896.199 - 9948.839: 97.9659% ( 16) 00:08:56.135 9948.839 - 10001.478: 98.0574% ( 13) 00:08:56.135 10001.478 - 10054.117: 98.1630% ( 15) 00:08:56.135 10054.117 - 10106.757: 98.2615% ( 14) 00:08:56.135 10106.757 - 10159.396: 98.3530% ( 13) 00:08:56.135 10159.396 - 10212.035: 98.4516% ( 14) 00:08:56.135 10212.035 - 10264.675: 98.5572% ( 15) 00:08:56.135 10264.675 - 10317.314: 98.6275% ( 10) 00:08:56.135 10317.314 - 10369.953: 98.6486% ( 3) 00:08:56.135 10738.429 - 10791.068: 98.6557% ( 1) 00:08:56.135 10791.068 - 10843.708: 98.6698% ( 2) 00:08:56.135 10843.708 - 10896.347: 98.6979% ( 4) 00:08:56.135 10896.347 - 10948.986: 98.7261% ( 4) 00:08:56.135 10948.986 - 11001.626: 98.7472% ( 3) 00:08:56.135 11001.626 - 11054.265: 98.7613% ( 2) 00:08:56.135 11054.265 - 11106.904: 98.7683% ( 1) 00:08:56.135 11106.904 - 11159.544: 98.7894% ( 3) 00:08:56.135 11159.544 - 11212.183: 98.8105% ( 3) 00:08:56.135 11212.183 - 11264.822: 98.8316% ( 3) 00:08:56.135 11264.822 - 11317.462: 98.8598% ( 4) 00:08:56.135 11317.462 - 11370.101: 98.8739% ( 2) 00:08:56.135 11370.101 - 11422.741: 98.9020% ( 4) 00:08:56.135 11422.741 - 11475.380: 98.9231% ( 3) 00:08:56.135 11475.380 - 11528.019: 98.9372% ( 2) 00:08:56.135 11528.019 - 11580.659: 98.9583% ( 3) 00:08:56.135 11580.659 - 11633.298: 98.9654% ( 1) 00:08:56.135 11633.298 - 11685.937: 98.9794% ( 2) 00:08:56.135 11685.937 - 11738.577: 98.9935% ( 2) 00:08:56.135 11738.577 - 11791.216: 99.0146% ( 3) 00:08:56.135 11791.216 - 11843.855: 99.0358% ( 3) 00:08:56.135 11843.855 - 11896.495: 99.0569% ( 3) 00:08:56.135 11896.495 - 11949.134: 99.0709% ( 2) 00:08:56.135 11949.134 - 12001.773: 99.0921% ( 3) 00:08:56.135 12001.773 - 12054.413: 99.0991% ( 1) 00:08:56.135 41058.699 - 41269.256: 99.1484% ( 7) 00:08:56.135 41269.256 - 41479.814: 99.1976% ( 7) 00:08:56.135 41479.814 - 41690.371: 99.2469% ( 7) 
00:08:56.135 41690.371 - 41900.929: 99.2962% ( 7) 00:08:56.135 41900.929 - 42111.486: 99.3454% ( 7) 00:08:56.135 42111.486 - 42322.043: 99.3947% ( 7) 00:08:56.135 42322.043 - 42532.601: 99.4440% ( 7) 00:08:56.135 42532.601 - 42743.158: 99.4932% ( 7) 00:08:56.135 42743.158 - 42953.716: 99.5425% ( 7) 00:08:56.135 42953.716 - 43164.273: 99.5495% ( 1) 00:08:56.135 47796.537 - 48007.094: 99.5777% ( 4) 00:08:56.135 48007.094 - 48217.651: 99.6199% ( 6) 00:08:56.135 48217.651 - 48428.209: 99.6692% ( 7) 00:08:56.135 48428.209 - 48638.766: 99.7185% ( 7) 00:08:56.135 48638.766 - 48849.324: 99.7677% ( 7) 00:08:56.135 48849.324 - 49059.881: 99.8170% ( 7) 00:08:56.135 49059.881 - 49270.439: 99.8592% ( 6) 00:08:56.135 49270.439 - 49480.996: 99.9085% ( 7) 00:08:56.135 49480.996 - 49691.553: 99.9578% ( 7) 00:08:56.135 49691.553 - 49902.111: 100.0000% ( 6) 00:08:56.135 00:08:56.135 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:56.135 ============================================================================== 00:08:56.135 Range in us Cumulative IO count 00:08:56.135 7948.543 - 8001.182: 0.0211% ( 3) 00:08:56.135 8001.182 - 8053.822: 0.2393% ( 31) 00:08:56.135 8053.822 - 8106.461: 0.9079% ( 95) 00:08:56.135 8106.461 - 8159.100: 2.2523% ( 191) 00:08:56.135 8159.100 - 8211.740: 5.3069% ( 434) 00:08:56.135 8211.740 - 8264.379: 9.0583% ( 533) 00:08:56.135 8264.379 - 8317.018: 13.4150% ( 619) 00:08:56.135 8317.018 - 8369.658: 18.4262% ( 712) 00:08:56.135 8369.658 - 8422.297: 24.1906% ( 819) 00:08:56.135 8422.297 - 8474.937: 29.7157% ( 785) 00:08:56.135 8474.937 - 8527.576: 35.4730% ( 818) 00:08:56.135 8527.576 - 8580.215: 41.2936% ( 827) 00:08:56.135 8580.215 - 8632.855: 47.1213% ( 828) 00:08:56.135 8632.855 - 8685.494: 53.0898% ( 848) 00:08:56.135 8685.494 - 8738.133: 58.9034% ( 826) 00:08:56.135 8738.133 - 8790.773: 64.8649% ( 847) 00:08:56.135 8790.773 - 8843.412: 70.7981% ( 843) 00:08:56.135 8843.412 - 8896.051: 76.4006% ( 796) 00:08:56.135 8896.051 - 8948.691: 81.4048% ( 711) 00:08:56.135 8948.691 - 9001.330: 85.5856% ( 594) 00:08:56.135 9001.330 - 9053.969: 88.5980% ( 428) 00:08:56.135 9053.969 - 9106.609: 90.9065% ( 328) 00:08:56.135 9106.609 - 9159.248: 92.4761% ( 223) 00:08:56.135 9159.248 - 9211.888: 93.5177% ( 148) 00:08:56.135 9211.888 - 9264.527: 94.1934% ( 96) 00:08:56.135 9264.527 - 9317.166: 94.7354% ( 77) 00:08:56.135 9317.166 - 9369.806: 95.2351% ( 71) 00:08:56.135 9369.806 - 9422.445: 95.7207% ( 69) 00:08:56.135 9422.445 - 9475.084: 96.1430% ( 60) 00:08:56.135 9475.084 - 9527.724: 96.4949% ( 50) 00:08:56.135 9527.724 - 9580.363: 96.8328% ( 48) 00:08:56.135 9580.363 - 9633.002: 97.1213% ( 41) 00:08:56.135 9633.002 - 9685.642: 97.3606% ( 34) 00:08:56.135 9685.642 - 9738.281: 97.5366% ( 25) 00:08:56.135 9738.281 - 9790.920: 97.6914% ( 22) 00:08:56.135 9790.920 - 9843.560: 97.8252% ( 19) 00:08:56.135 9843.560 - 9896.199: 97.9378% ( 16) 00:08:56.135 9896.199 - 9948.839: 98.0434% ( 15) 00:08:56.135 9948.839 - 10001.478: 98.1419% ( 14) 00:08:56.135 10001.478 - 10054.117: 98.2404% ( 14) 00:08:56.135 10054.117 - 10106.757: 98.3319% ( 13) 00:08:56.135 10106.757 - 10159.396: 98.4093% ( 11) 00:08:56.135 10159.396 - 10212.035: 98.4938% ( 12) 00:08:56.135 10212.035 - 10264.675: 98.5431% ( 7) 00:08:56.135 10264.675 - 10317.314: 98.5853% ( 6) 00:08:56.135 10317.314 - 10369.953: 98.6135% ( 4) 00:08:56.135 10369.953 - 10422.593: 98.6416% ( 4) 00:08:56.135 10422.593 - 10475.232: 98.6486% ( 1) 00:08:56.135 10527.871 - 10580.511: 98.6557% ( 1) 00:08:56.135 10580.511 - 10633.150: 98.6768% ( 
3) 00:08:56.135 10633.150 - 10685.790: 98.7190% ( 6) 00:08:56.135 10685.790 - 10738.429: 98.7472% ( 4) 00:08:56.135 10791.068 - 10843.708: 98.7542% ( 1) 00:08:56.135 10843.708 - 10896.347: 98.7753% ( 3) 00:08:56.135 10896.347 - 10948.986: 98.7965% ( 3) 00:08:56.135 10948.986 - 11001.626: 98.8176% ( 3) 00:08:56.135 11001.626 - 11054.265: 98.8387% ( 3) 00:08:56.135 11054.265 - 11106.904: 98.8457% ( 1) 00:08:56.135 11106.904 - 11159.544: 98.8598% ( 2) 00:08:56.135 11159.544 - 11212.183: 98.8809% ( 3) 00:08:56.135 11212.183 - 11264.822: 98.8950% ( 2) 00:08:56.135 11264.822 - 11317.462: 98.9091% ( 2) 00:08:56.135 11317.462 - 11370.101: 98.9302% ( 3) 00:08:56.135 11370.101 - 11422.741: 98.9443% ( 2) 00:08:56.135 11422.741 - 11475.380: 98.9654% ( 3) 00:08:56.135 11475.380 - 11528.019: 98.9865% ( 3) 00:08:56.135 11528.019 - 11580.659: 99.0076% ( 3) 00:08:56.135 11580.659 - 11633.298: 99.0287% ( 3) 00:08:56.135 11633.298 - 11685.937: 99.0428% ( 2) 00:08:56.135 11685.937 - 11738.577: 99.0569% ( 2) 00:08:56.135 11738.577 - 11791.216: 99.0780% ( 3) 00:08:56.135 11791.216 - 11843.855: 99.0921% ( 2) 00:08:56.135 11843.855 - 11896.495: 99.0991% ( 1) 00:08:56.135 39163.682 - 39374.239: 99.1273% ( 4) 00:08:56.135 39374.239 - 39584.797: 99.1695% ( 6) 00:08:56.135 39584.797 - 39795.354: 99.2188% ( 7) 00:08:56.135 39795.354 - 40005.912: 99.2610% ( 6) 00:08:56.135 40005.912 - 40216.469: 99.3102% ( 7) 00:08:56.135 40216.469 - 40427.027: 99.3525% ( 6) 00:08:56.135 40427.027 - 40637.584: 99.4088% ( 8) 00:08:56.135 40637.584 - 40848.141: 99.4581% ( 7) 00:08:56.135 40848.141 - 41058.699: 99.5073% ( 7) 00:08:56.135 41058.699 - 41269.256: 99.5495% ( 6) 00:08:56.135 45901.520 - 46112.077: 99.5707% ( 3) 00:08:56.135 46112.077 - 46322.635: 99.6129% ( 6) 00:08:56.135 46322.635 - 46533.192: 99.6622% ( 7) 00:08:56.135 46533.192 - 46743.749: 99.7114% ( 7) 00:08:56.135 46743.749 - 46954.307: 99.7607% ( 7) 00:08:56.135 46954.307 - 47164.864: 99.8100% ( 7) 00:08:56.135 47164.864 - 47375.422: 99.8592% ( 7) 00:08:56.135 47375.422 - 47585.979: 99.9085% ( 7) 00:08:56.135 47585.979 - 47796.537: 99.9578% ( 7) 00:08:56.135 47796.537 - 48007.094: 100.0000% ( 6) 00:08:56.135 00:08:56.135 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:56.135 ============================================================================== 00:08:56.136 Range in us Cumulative IO count 00:08:56.136 7948.543 - 8001.182: 0.0352% ( 5) 00:08:56.136 8001.182 - 8053.822: 0.3238% ( 41) 00:08:56.136 8053.822 - 8106.461: 1.0206% ( 99) 00:08:56.136 8106.461 - 8159.100: 2.7731% ( 249) 00:08:56.136 8159.100 - 8211.740: 5.3702% ( 369) 00:08:56.136 8211.740 - 8264.379: 9.0231% ( 519) 00:08:56.136 8264.379 - 8317.018: 13.3727% ( 618) 00:08:56.136 8317.018 - 8369.658: 18.7500% ( 764) 00:08:56.136 8369.658 - 8422.297: 24.2328% ( 779) 00:08:56.136 8422.297 - 8474.937: 29.9198% ( 808) 00:08:56.136 8474.937 - 8527.576: 35.8812% ( 847) 00:08:56.136 8527.576 - 8580.215: 41.6033% ( 813) 00:08:56.136 8580.215 - 8632.855: 47.3958% ( 823) 00:08:56.136 8632.855 - 8685.494: 53.1883% ( 823) 00:08:56.136 8685.494 - 8738.133: 59.0442% ( 832) 00:08:56.136 8738.133 - 8790.773: 64.8578% ( 826) 00:08:56.136 8790.773 - 8843.412: 70.7418% ( 836) 00:08:56.136 8843.412 - 8896.051: 76.2458% ( 782) 00:08:56.136 8896.051 - 8948.691: 81.1303% ( 694) 00:08:56.136 8948.691 - 9001.330: 85.2970% ( 592) 00:08:56.136 9001.330 - 9053.969: 88.4854% ( 453) 00:08:56.136 9053.969 - 9106.609: 90.8573% ( 337) 00:08:56.136 9106.609 - 9159.248: 92.4198% ( 222) 00:08:56.136 9159.248 - 9211.888: 
93.4896% ( 152) 00:08:56.136 9211.888 - 9264.527: 94.2286% ( 105) 00:08:56.136 9264.527 - 9317.166: 94.7706% ( 77) 00:08:56.136 9317.166 - 9369.806: 95.2421% ( 67) 00:08:56.136 9369.806 - 9422.445: 95.6292% ( 55) 00:08:56.136 9422.445 - 9475.084: 96.0586% ( 61) 00:08:56.136 9475.084 - 9527.724: 96.4738% ( 59) 00:08:56.136 9527.724 - 9580.363: 96.8328% ( 51) 00:08:56.136 9580.363 - 9633.002: 97.0650% ( 33) 00:08:56.136 9633.002 - 9685.642: 97.2832% ( 31) 00:08:56.136 9685.642 - 9738.281: 97.4521% ( 24) 00:08:56.136 9738.281 - 9790.920: 97.5859% ( 19) 00:08:56.136 9790.920 - 9843.560: 97.6774% ( 13) 00:08:56.136 9843.560 - 9896.199: 97.7618% ( 12) 00:08:56.136 9896.199 - 9948.839: 97.8463% ( 12) 00:08:56.136 9948.839 - 10001.478: 97.9237% ( 11) 00:08:56.136 10001.478 - 10054.117: 98.0222% ( 14) 00:08:56.136 10054.117 - 10106.757: 98.1278% ( 15) 00:08:56.136 10106.757 - 10159.396: 98.2193% ( 13) 00:08:56.136 10159.396 - 10212.035: 98.3249% ( 15) 00:08:56.136 10212.035 - 10264.675: 98.4164% ( 13) 00:08:56.136 10264.675 - 10317.314: 98.4657% ( 7) 00:08:56.136 10317.314 - 10369.953: 98.5079% ( 6) 00:08:56.136 10369.953 - 10422.593: 98.5501% ( 6) 00:08:56.136 10422.593 - 10475.232: 98.5642% ( 2) 00:08:56.136 10475.232 - 10527.871: 98.5712% ( 1) 00:08:56.136 10527.871 - 10580.511: 98.5923% ( 3) 00:08:56.136 10580.511 - 10633.150: 98.6064% ( 2) 00:08:56.136 10633.150 - 10685.790: 98.6275% ( 3) 00:08:56.136 10685.790 - 10738.429: 98.6416% ( 2) 00:08:56.136 10738.429 - 10791.068: 98.6486% ( 1) 00:08:56.136 10791.068 - 10843.708: 98.6557% ( 1) 00:08:56.136 10843.708 - 10896.347: 98.6698% ( 2) 00:08:56.136 10896.347 - 10948.986: 98.7050% ( 5) 00:08:56.136 10948.986 - 11001.626: 98.7261% ( 3) 00:08:56.136 11001.626 - 11054.265: 98.7331% ( 1) 00:08:56.136 11054.265 - 11106.904: 98.7401% ( 1) 00:08:56.136 11106.904 - 11159.544: 98.7613% ( 3) 00:08:56.136 11159.544 - 11212.183: 98.7824% ( 3) 00:08:56.136 11212.183 - 11264.822: 98.7965% ( 2) 00:08:56.136 11264.822 - 11317.462: 98.8176% ( 3) 00:08:56.136 11317.462 - 11370.101: 98.8457% ( 4) 00:08:56.136 11370.101 - 11422.741: 98.8598% ( 2) 00:08:56.136 11422.741 - 11475.380: 98.8880% ( 4) 00:08:56.136 11475.380 - 11528.019: 98.9020% ( 2) 00:08:56.136 11528.019 - 11580.659: 98.9302% ( 4) 00:08:56.136 11580.659 - 11633.298: 98.9372% ( 1) 00:08:56.136 11633.298 - 11685.937: 98.9583% ( 3) 00:08:56.136 11685.937 - 11738.577: 98.9654% ( 1) 00:08:56.136 11738.577 - 11791.216: 98.9794% ( 2) 00:08:56.136 11791.216 - 11843.855: 98.9935% ( 2) 00:08:56.136 11843.855 - 11896.495: 99.0146% ( 3) 00:08:56.136 11896.495 - 11949.134: 99.0287% ( 2) 00:08:56.136 11949.134 - 12001.773: 99.0428% ( 2) 00:08:56.136 12001.773 - 12054.413: 99.0639% ( 3) 00:08:56.136 12054.413 - 12107.052: 99.0780% ( 2) 00:08:56.136 12107.052 - 12159.692: 99.0991% ( 3) 00:08:56.136 36847.550 - 37058.108: 99.1132% ( 2) 00:08:56.136 37058.108 - 37268.665: 99.1624% ( 7) 00:08:56.136 37268.665 - 37479.222: 99.2117% ( 7) 00:08:56.136 37479.222 - 37689.780: 99.2680% ( 8) 00:08:56.136 37689.780 - 37900.337: 99.3102% ( 6) 00:08:56.136 37900.337 - 38110.895: 99.3595% ( 7) 00:08:56.136 38110.895 - 38321.452: 99.4088% ( 7) 00:08:56.136 38321.452 - 38532.010: 99.4581% ( 7) 00:08:56.136 38532.010 - 38742.567: 99.5073% ( 7) 00:08:56.136 38742.567 - 38953.124: 99.5495% ( 6) 00:08:56.136 43585.388 - 43795.945: 99.5918% ( 6) 00:08:56.136 43795.945 - 44006.503: 99.6410% ( 7) 00:08:56.136 44006.503 - 44217.060: 99.6833% ( 6) 00:08:56.136 44217.060 - 44427.618: 99.7325% ( 7) 00:08:56.136 44427.618 - 44638.175: 99.7818% ( 
7) 00:08:56.136 44638.175 - 44848.733: 99.8311% ( 7) 00:08:56.136 44848.733 - 45059.290: 99.8803% ( 7) 00:08:56.136 45059.290 - 45269.847: 99.9296% ( 7) 00:08:56.136 45269.847 - 45480.405: 99.9718% ( 6) 00:08:56.136 45480.405 - 45690.962: 100.0000% ( 4) 00:08:56.136 00:08:56.136 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:56.136 ============================================================================== 00:08:56.136 Range in us Cumulative IO count 00:08:56.136 7948.543 - 8001.182: 0.0352% ( 5) 00:08:56.136 8001.182 - 8053.822: 0.3378% ( 43) 00:08:56.136 8053.822 - 8106.461: 1.1050% ( 109) 00:08:56.136 8106.461 - 8159.100: 2.8857% ( 253) 00:08:56.136 8159.100 - 8211.740: 5.5321% ( 376) 00:08:56.136 8211.740 - 8264.379: 9.0372% ( 498) 00:08:56.136 8264.379 - 8317.018: 13.4220% ( 623) 00:08:56.136 8317.018 - 8369.658: 18.6163% ( 738) 00:08:56.136 8369.658 - 8422.297: 24.1906% ( 792) 00:08:56.136 8422.297 - 8474.937: 29.8212% ( 800) 00:08:56.136 8474.937 - 8527.576: 35.5293% ( 811) 00:08:56.136 8527.576 - 8580.215: 41.2373% ( 811) 00:08:56.136 8580.215 - 8632.855: 47.0580% ( 827) 00:08:56.136 8632.855 - 8685.494: 52.8364% ( 821) 00:08:56.136 8685.494 - 8738.133: 58.5867% ( 817) 00:08:56.136 8738.133 - 8790.773: 64.5481% ( 847) 00:08:56.136 8790.773 - 8843.412: 70.2703% ( 813) 00:08:56.136 8843.412 - 8896.051: 75.8868% ( 798) 00:08:56.136 8896.051 - 8948.691: 80.9825% ( 724) 00:08:56.136 8948.691 - 9001.330: 85.0507% ( 578) 00:08:56.136 9001.330 - 9053.969: 88.3798% ( 473) 00:08:56.136 9053.969 - 9106.609: 90.8784% ( 355) 00:08:56.136 9106.609 - 9159.248: 92.4831% ( 228) 00:08:56.136 9159.248 - 9211.888: 93.5529% ( 152) 00:08:56.136 9211.888 - 9264.527: 94.2497% ( 99) 00:08:56.136 9264.527 - 9317.166: 94.8269% ( 82) 00:08:56.136 9317.166 - 9369.806: 95.3195% ( 70) 00:08:56.136 9369.806 - 9422.445: 95.7559% ( 62) 00:08:56.136 9422.445 - 9475.084: 96.1571% ( 57) 00:08:56.136 9475.084 - 9527.724: 96.6216% ( 66) 00:08:56.136 9527.724 - 9580.363: 96.9595% ( 48) 00:08:56.136 9580.363 - 9633.002: 97.2058% ( 35) 00:08:56.136 9633.002 - 9685.642: 97.4169% ( 30) 00:08:56.136 9685.642 - 9738.281: 97.6070% ( 27) 00:08:56.136 9738.281 - 9790.920: 97.7477% ( 20) 00:08:56.136 9790.920 - 9843.560: 97.8533% ( 15) 00:08:56.136 9843.560 - 9896.199: 97.9519% ( 14) 00:08:56.136 9896.199 - 9948.839: 98.0293% ( 11) 00:08:56.136 9948.839 - 10001.478: 98.1137% ( 12) 00:08:56.136 10001.478 - 10054.117: 98.1912% ( 11) 00:08:56.136 10054.117 - 10106.757: 98.2475% ( 8) 00:08:56.136 10106.757 - 10159.396: 98.2897% ( 6) 00:08:56.136 10159.396 - 10212.035: 98.3390% ( 7) 00:08:56.136 10212.035 - 10264.675: 98.3742% ( 5) 00:08:56.136 10264.675 - 10317.314: 98.3882% ( 2) 00:08:56.136 10317.314 - 10369.953: 98.4093% ( 3) 00:08:56.136 10369.953 - 10422.593: 98.4234% ( 2) 00:08:56.136 10422.593 - 10475.232: 98.4375% ( 2) 00:08:56.136 10475.232 - 10527.871: 98.4586% ( 3) 00:08:56.136 10527.871 - 10580.511: 98.4727% ( 2) 00:08:56.136 10580.511 - 10633.150: 98.4938% ( 3) 00:08:56.136 10633.150 - 10685.790: 98.5149% ( 3) 00:08:56.136 10685.790 - 10738.429: 98.5360% ( 3) 00:08:56.136 10738.429 - 10791.068: 98.5501% ( 2) 00:08:56.136 10791.068 - 10843.708: 98.5642% ( 2) 00:08:56.136 10843.708 - 10896.347: 98.5783% ( 2) 00:08:56.136 10896.347 - 10948.986: 98.5923% ( 2) 00:08:56.136 10948.986 - 11001.626: 98.6064% ( 2) 00:08:56.136 11001.626 - 11054.265: 98.6275% ( 3) 00:08:56.136 11054.265 - 11106.904: 98.6416% ( 2) 00:08:56.136 11106.904 - 11159.544: 98.6627% ( 3) 00:08:56.136 11159.544 - 11212.183: 98.6768% ( 
2) 00:08:56.136 11212.183 - 11264.822: 98.6979% ( 3) 00:08:56.136 11264.822 - 11317.462: 98.7120% ( 2) 00:08:56.136 11317.462 - 11370.101: 98.7401% ( 4) 00:08:56.136 11370.101 - 11422.741: 98.7472% ( 1) 00:08:56.136 11422.741 - 11475.380: 98.7613% ( 2) 00:08:56.136 11475.380 - 11528.019: 98.7753% ( 2) 00:08:56.136 11528.019 - 11580.659: 98.7965% ( 3) 00:08:56.136 11580.659 - 11633.298: 98.8176% ( 3) 00:08:56.136 11633.298 - 11685.937: 98.8316% ( 2) 00:08:56.136 11685.937 - 11738.577: 98.8457% ( 2) 00:08:56.136 11738.577 - 11791.216: 98.8739% ( 4) 00:08:56.136 11791.216 - 11843.855: 98.8880% ( 2) 00:08:56.136 11843.855 - 11896.495: 98.9091% ( 3) 00:08:56.136 11896.495 - 11949.134: 98.9231% ( 2) 00:08:56.136 11949.134 - 12001.773: 98.9372% ( 2) 00:08:56.136 12001.773 - 12054.413: 98.9583% ( 3) 00:08:56.136 12054.413 - 12107.052: 98.9654% ( 1) 00:08:56.136 12107.052 - 12159.692: 98.9865% ( 3) 00:08:56.136 12159.692 - 12212.331: 99.0006% ( 2) 00:08:56.136 12212.331 - 12264.970: 99.0146% ( 2) 00:08:56.136 12264.970 - 12317.610: 99.0358% ( 3) 00:08:56.136 12317.610 - 12370.249: 99.0498% ( 2) 00:08:56.136 12370.249 - 12422.888: 99.0639% ( 2) 00:08:56.137 12422.888 - 12475.528: 99.0850% ( 3) 00:08:56.137 12475.528 - 12528.167: 99.0991% ( 2) 00:08:56.137 34531.418 - 34741.976: 99.1413% ( 6) 00:08:56.137 34741.976 - 34952.533: 99.1906% ( 7) 00:08:56.137 34952.533 - 35163.091: 99.2399% ( 7) 00:08:56.137 35163.091 - 35373.648: 99.2891% ( 7) 00:08:56.137 35373.648 - 35584.206: 99.3384% ( 7) 00:08:56.137 35584.206 - 35794.763: 99.3806% ( 6) 00:08:56.137 35794.763 - 36005.320: 99.4229% ( 6) 00:08:56.137 36005.320 - 36215.878: 99.4792% ( 8) 00:08:56.137 36215.878 - 36426.435: 99.5214% ( 6) 00:08:56.137 36426.435 - 36636.993: 99.5495% ( 4) 00:08:56.137 41058.699 - 41269.256: 99.5777% ( 4) 00:08:56.137 41269.256 - 41479.814: 99.6129% ( 5) 00:08:56.137 41479.814 - 41690.371: 99.6622% ( 7) 00:08:56.137 41690.371 - 41900.929: 99.6974% ( 5) 00:08:56.137 41900.929 - 42111.486: 99.7466% ( 7) 00:08:56.137 42111.486 - 42322.043: 99.7959% ( 7) 00:08:56.137 42322.043 - 42532.601: 99.8452% ( 7) 00:08:56.137 42532.601 - 42743.158: 99.8944% ( 7) 00:08:56.137 42743.158 - 42953.716: 99.9437% ( 7) 00:08:56.137 42953.716 - 43164.273: 99.9930% ( 7) 00:08:56.137 43164.273 - 43374.831: 100.0000% ( 1) 00:08:56.137 00:08:56.137 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:56.137 ============================================================================== 00:08:56.137 Range in us Cumulative IO count 00:08:56.137 7895.904 - 7948.543: 0.0070% ( 1) 00:08:56.137 7948.543 - 8001.182: 0.0981% ( 13) 00:08:56.137 8001.182 - 8053.822: 0.3994% ( 43) 00:08:56.137 8053.822 - 8106.461: 1.1841% ( 112) 00:08:56.137 8106.461 - 8159.100: 2.5715% ( 198) 00:08:56.137 8159.100 - 8211.740: 5.4793% ( 415) 00:08:56.137 8211.740 - 8264.379: 8.9896% ( 501) 00:08:56.137 8264.379 - 8317.018: 13.2988% ( 615) 00:08:56.137 8317.018 - 8369.658: 18.7220% ( 774) 00:08:56.137 8369.658 - 8422.297: 24.2223% ( 785) 00:08:56.137 8422.297 - 8474.937: 29.8627% ( 805) 00:08:56.137 8474.937 - 8527.576: 35.5872% ( 817) 00:08:56.137 8527.576 - 8580.215: 41.2276% ( 805) 00:08:56.137 8580.215 - 8632.855: 46.9521% ( 817) 00:08:56.137 8632.855 - 8685.494: 52.5995% ( 806) 00:08:56.137 8685.494 - 8738.133: 58.4291% ( 832) 00:08:56.137 8738.133 - 8790.773: 64.2727% ( 834) 00:08:56.137 8790.773 - 8843.412: 69.9972% ( 817) 00:08:56.137 8843.412 - 8896.051: 75.5465% ( 792) 00:08:56.137 8896.051 - 8948.691: 80.6684% ( 731) 00:08:56.137 8948.691 - 9001.330: 
84.8795% ( 601) 00:08:56.137 9001.330 - 9053.969: 88.1236% ( 463) 00:08:56.137 9053.969 - 9106.609: 90.6530% ( 361) 00:08:56.137 9106.609 - 9159.248: 92.2996% ( 235) 00:08:56.137 9159.248 - 9211.888: 93.2525% ( 136) 00:08:56.137 9211.888 - 9264.527: 93.9252% ( 96) 00:08:56.137 9264.527 - 9317.166: 94.4226% ( 71) 00:08:56.137 9317.166 - 9369.806: 94.9552% ( 76) 00:08:56.137 9369.806 - 9422.445: 95.4036% ( 64) 00:08:56.137 9422.445 - 9475.084: 95.8170% ( 59) 00:08:56.137 9475.084 - 9527.724: 96.2444% ( 61) 00:08:56.137 9527.724 - 9580.363: 96.5667% ( 46) 00:08:56.137 9580.363 - 9633.002: 96.8189% ( 36) 00:08:56.137 9633.002 - 9685.642: 97.0432% ( 32) 00:08:56.137 9685.642 - 9738.281: 97.2464% ( 29) 00:08:56.137 9738.281 - 9790.920: 97.3725% ( 18) 00:08:56.137 9790.920 - 9843.560: 97.4776% ( 15) 00:08:56.137 9843.560 - 9896.199: 97.5617% ( 12) 00:08:56.137 9896.199 - 9948.839: 97.6457% ( 12) 00:08:56.137 9948.839 - 10001.478: 97.7298% ( 12) 00:08:56.137 10001.478 - 10054.117: 97.8139% ( 12) 00:08:56.137 10054.117 - 10106.757: 97.9190% ( 15) 00:08:56.137 10106.757 - 10159.396: 98.0101% ( 13) 00:08:56.137 10159.396 - 10212.035: 98.0942% ( 12) 00:08:56.137 10212.035 - 10264.675: 98.1572% ( 9) 00:08:56.137 10264.675 - 10317.314: 98.2063% ( 7) 00:08:56.137 10317.314 - 10369.953: 98.2483% ( 6) 00:08:56.137 10369.953 - 10422.593: 98.3044% ( 8) 00:08:56.137 10422.593 - 10475.232: 98.3324% ( 4) 00:08:56.137 10475.232 - 10527.871: 98.3534% ( 3) 00:08:56.137 10527.871 - 10580.511: 98.3744% ( 3) 00:08:56.137 10580.511 - 10633.150: 98.3955% ( 3) 00:08:56.137 10633.150 - 10685.790: 98.4165% ( 3) 00:08:56.137 10685.790 - 10738.429: 98.4235% ( 1) 00:08:56.137 10738.429 - 10791.068: 98.4375% ( 2) 00:08:56.137 10791.068 - 10843.708: 98.4585% ( 3) 00:08:56.137 10843.708 - 10896.347: 98.4725% ( 2) 00:08:56.137 10896.347 - 10948.986: 98.4936% ( 3) 00:08:56.137 10948.986 - 11001.626: 98.5076% ( 2) 00:08:56.137 11001.626 - 11054.265: 98.5286% ( 3) 00:08:56.137 11054.265 - 11106.904: 98.5426% ( 2) 00:08:56.137 11106.904 - 11159.544: 98.5636% ( 3) 00:08:56.137 11159.544 - 11212.183: 98.5846% ( 3) 00:08:56.137 11212.183 - 11264.822: 98.5987% ( 2) 00:08:56.137 11264.822 - 11317.462: 98.6127% ( 2) 00:08:56.137 11317.462 - 11370.101: 98.6337% ( 3) 00:08:56.137 11370.101 - 11422.741: 98.6547% ( 3) 00:08:56.137 11422.741 - 11475.380: 98.6617% ( 1) 00:08:56.137 11475.380 - 11528.019: 98.6827% ( 3) 00:08:56.137 11528.019 - 11580.659: 98.6967% ( 2) 00:08:56.137 11580.659 - 11633.298: 98.7038% ( 1) 00:08:56.137 11633.298 - 11685.937: 98.7248% ( 3) 00:08:56.137 11685.937 - 11738.577: 98.7388% ( 2) 00:08:56.137 11738.577 - 11791.216: 98.7598% ( 3) 00:08:56.137 11791.216 - 11843.855: 98.7738% ( 2) 00:08:56.137 11843.855 - 11896.495: 98.8018% ( 4) 00:08:56.137 11896.495 - 11949.134: 98.8159% ( 2) 00:08:56.137 11949.134 - 12001.773: 98.8299% ( 2) 00:08:56.137 12001.773 - 12054.413: 98.8439% ( 2) 00:08:56.137 12054.413 - 12107.052: 98.8579% ( 2) 00:08:56.137 12107.052 - 12159.692: 98.8789% ( 3) 00:08:56.137 12159.692 - 12212.331: 98.8929% ( 2) 00:08:56.137 12212.331 - 12264.970: 98.9140% ( 3) 00:08:56.137 12264.970 - 12317.610: 98.9280% ( 2) 00:08:56.137 12317.610 - 12370.249: 98.9490% ( 3) 00:08:56.137 12370.249 - 12422.888: 98.9630% ( 2) 00:08:56.137 12422.888 - 12475.528: 98.9840% ( 3) 00:08:56.137 12475.528 - 12528.167: 98.9980% ( 2) 00:08:56.137 12528.167 - 12580.806: 99.0191% ( 3) 00:08:56.137 12580.806 - 12633.446: 99.0331% ( 2) 00:08:56.137 12633.446 - 12686.085: 99.0541% ( 3) 00:08:56.137 12686.085 - 12738.724: 99.0681% ( 
2) 00:08:56.137 12738.724 - 12791.364: 99.0891% ( 3) 00:08:56.137 12791.364 - 12844.003: 99.1031% ( 2) 00:08:56.137 26951.351 - 27161.908: 99.1101% ( 1) 00:08:56.137 27161.908 - 27372.466: 99.1522% ( 6) 00:08:56.137 27372.466 - 27583.023: 99.2012% ( 7) 00:08:56.137 27583.023 - 27793.581: 99.2503% ( 7) 00:08:56.137 27793.581 - 28004.138: 99.2993% ( 7) 00:08:56.137 28004.138 - 28214.696: 99.3484% ( 7) 00:08:56.137 28214.696 - 28425.253: 99.4044% ( 8) 00:08:56.137 28425.253 - 28635.810: 99.4465% ( 6) 00:08:56.137 28635.810 - 28846.368: 99.4955% ( 7) 00:08:56.137 28846.368 - 29056.925: 99.5446% ( 7) 00:08:56.137 29056.925 - 29267.483: 99.5516% ( 1) 00:08:56.137 33689.189 - 33899.746: 99.5586% ( 1) 00:08:56.137 33899.746 - 34110.304: 99.6076% ( 7) 00:08:56.137 34110.304 - 34320.861: 99.6637% ( 8) 00:08:56.137 34320.861 - 34531.418: 99.7127% ( 7) 00:08:56.137 34531.418 - 34741.976: 99.7548% ( 6) 00:08:56.137 34741.976 - 34952.533: 99.8038% ( 7) 00:08:56.137 34952.533 - 35163.091: 99.8529% ( 7) 00:08:56.137 35163.091 - 35373.648: 99.9089% ( 8) 00:08:56.137 35373.648 - 35584.206: 99.9580% ( 7) 00:08:56.137 35584.206 - 35794.763: 100.0000% ( 6) 00:08:56.137 00:08:56.137 10:52:42 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:57.517 Initializing NVMe Controllers 00:08:57.517 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.517 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.517 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.517 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.517 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:57.517 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:57.517 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:57.517 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:57.517 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:57.517 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:57.517 Initialization complete. Launching workers. 
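(Aside, not part of the captured output: both perf runs use a fixed queue depth of 128 (-q 128), so by Little's law IOPS should come out close to queue_depth / average_latency. Checking the first row of the write-workload table that follows, with its average latency of 9363.11 us:
    awk 'BEGIN { printf "%.0f IOPS\n", 128 / (9363.11 / 1e6) }'   # prints 13671, close to the 13704.52 measured)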
00:08:57.517 ========================================================
00:08:57.517                                                                             Latency(us)
00:08:57.517 Device Information                      :       IOPS      MiB/s    Average        min        max
00:08:57.517 PCIE (0000:00:10.0) NSID 1 from core  0:   13704.52     160.60    9363.11    6671.35   42367.88
00:08:57.517 PCIE (0000:00:11.0) NSID 1 from core  0:   13704.52     160.60    9348.86    6894.96   40488.10
00:08:57.517 PCIE (0000:00:13.0) NSID 1 from core  0:   13704.52     160.60    9335.20    6823.41   39162.98
00:08:57.517 PCIE (0000:00:12.0) NSID 1 from core  0:   13704.52     160.60    9320.87    6986.25   37117.61
00:08:57.517 PCIE (0000:00:12.0) NSID 2 from core  0:   13704.52     160.60    9306.02    6742.63   35405.37
00:08:57.517 PCIE (0000:00:12.0) NSID 3 from core  0:   13768.27     161.35    9247.97    6935.65   28046.10
00:08:57.517 ========================================================
00:08:57.517 Total                                   :   82290.89     964.35    9320.28    6671.35   42367.88
00:08:57.517
00:08:57.517 Summary latency percentiles (us), per device, all from core 0:
00:08:57.517 ==========================================================================================
00:08:57.517 Percentile     00:10.0/1   00:11.0/1   00:13.0/1   00:12.0/1   00:12.0/2   00:12.0/3
00:08:57.517  1.00000% :    7211.592    7211.592    7264.231    7264.231    7264.231    7211.592
00:08:57.517 10.00000% :    7948.543    7948.543    7948.543    7895.904    7948.543    7895.904
00:08:57.517 25.00000% :    8527.576    8527.576    8527.576    8527.576    8527.576    8527.576
00:08:57.517 50.00000% :    9159.248    9159.248    9159.248    9211.888    9159.248    9159.248
00:08:57.517 75.00000% :    9580.363    9580.363    9580.363    9527.724    9580.363    9580.363
00:08:57.517 90.00000% :    9948.839    9896.199    9896.199    9843.560    9843.560    9896.199
00:08:57.517 95.00000% :   10369.953   10317.314   10317.314   10264.675   10159.396   10527.871
00:08:57.517 98.00000% :   13791.512   13159.839   14212.627   15160.135   14949.578   14107.348
00:08:57.517 99.00000% :   18213.218   18318.496   19055.447   19476.562   19897.677   18529.054
00:08:57.517 99.50000% :   33899.746   32215.287   31583.614   29899.155   28425.253   20108.235
00:08:57.517 99.90000% :   42111.486   40216.469   38953.124   36847.550   35163.091   27793.581
00:08:57.517 99.99000% :   42322.043   40637.584   39163.682   37268.665   35584.206   28214.696
00:08:57.517 99.99900% :   42532.601   40637.584   39163.682   37268.665   35584.206   28214.696
00:08:57.517 99.99990% :   42532.601   40637.584   39163.682   37268.665   35584.206   28214.696
00:08:57.517 99.99999% :   42532.601   40637.584   39163.682   37268.665   35584.206   28214.696
00:08:57.517 ==========================================================================================
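The MiB/s column above is consistent with the IOPS column if each IO is 12 KiB; the IO size itself is not printed in this excerpt, so treat 12 KiB as an inference from the numbers rather than a value read from the test configuration. A minimal sanity check under that assumption:

    #!/usr/bin/env bash
    # Cross-check the table above: IOPS x assumed IO size should reproduce MiB/s.
    # io_kib=12 is inferred (13704.52 * 12 / 1024 = 160.60), not taken from the log.
    io_kib=12
    for iops in 13704.52 13768.27 82290.89; do
      awk -v iops="$iops" -v sz="$io_kib" \
          'BEGIN { printf "%10.2f IOPS -> %7.2f MiB/s\n", iops, iops * sz / 1024 }'
    done

Running it reproduces 160.60, 161.35 and 964.35 MiB/s, matching the table row for row.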
00:08:57.518 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: cumulative IO count per range-in-us bucket, 6658.879us through 42532.601us
00:08:57.519 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: cumulative IO count per range-in-us bucket, 6843.116us through 40637.584us
00:08:57.520 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: cumulative IO count per range-in-us bucket, 6790.477us through 39163.682us
00:08:57.520 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: cumulative IO count per range-in-us bucket, 6948.395us through 37268.665us
00:08:57.521 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: cumulative IO count per range-in-us bucket, 6737.838us through 35584.206us
00:08:57.522 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: cumulative IO count per range-in-us bucket, 6895.756us through 28214.696us
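One thing the percentile table makes easy to see is the tail: on every device the median sits near 9.2 ms while the 99.5th percentile jumps to roughly 20 to 34 ms. For PCIE (0000:00:10.0) NSID 1, the p99.9-to-median ratio works out to about 4.6x; a one-liner for the same computation, with the values copied from the table above rather than parsed from the log:

    # Tail amplification for PCIE (0000:00:10.0) NSID 1, values from the summary table above.
    awk 'BEGIN { p50 = 9159.248; p999 = 42111.486; printf "p99.9/p50 = %.2fx\n", p999 / p50 }'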
00:08:57.523
00:08:57.523 10:52:44 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:08:57.523
00:08:57.523 real 0m2.705s
00:08:57.523 user 0m2.284s
00:08:57.523 sys 0m0.321s
00:08:57.523 10:52:44 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.523 10:52:44 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:08:57.523 ************************************
00:08:57.523 END TEST nvme_perf
00:08:57.523 ************************************
00:08:57.523 10:52:44 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:57.523 10:52:44 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:57.523 10:52:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.523 10:52:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.523 ************************************
00:08:57.523 START TEST nvme_hello_world
00:08:57.523 ************************************
00:08:57.523 10:52:44 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:57.782 Initializing NVMe Controllers
00:08:57.782 Attached to 0000:00:10.0
00:08:57.782 Namespace ID: 1 size: 6GB
00:08:57.782 Attached to 0000:00:11.0
00:08:57.782 Namespace ID: 1 size: 5GB
00:08:57.782 Attached to 0000:00:13.0
00:08:57.782 Namespace ID: 1 size: 1GB
00:08:57.782 Attached to 0000:00:12.0
00:08:57.782 Namespace ID: 1 size: 4GB
00:08:57.782 Namespace ID: 2 size: 4GB
00:08:57.782 Namespace ID: 3 size: 4GB
00:08:57.782 Initialization complete.
00:08:57.782 INFO: using host memory buffer for IO
00:08:57.782 Hello world!
00:08:57.782 INFO: using host memory buffer for IO
00:08:57.782 Hello world!
00:08:57.782 INFO: using host memory buffer for IO
00:08:57.782 Hello world!
00:08:57.782 INFO: using host memory buffer for IO
00:08:57.782 Hello world!
00:08:57.782 INFO: using host memory buffer for IO
00:08:57.782 Hello world!
00:08:57.782 INFO: using host memory buffer for IO
00:08:57.782 Hello world!
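The Hello world! lines above are the stdout of SPDK's stock hello_world example, one line per attached namespace. A minimal sketch of rerunning it by hand with the paths visible in this log; the scripts/setup.sh step is the usual prerequisite for binding the NVMe controllers to a userspace driver and is an assumption here, not something shown in this excerpt:

    #!/usr/bin/env bash
    # Hypothetical manual rerun of the example exercised above.
    # Paths come from the log; the setup.sh step is assumed, not shown in this excerpt.
    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh                 # bind NVMe controllers for userspace access
    sudo build/examples/hello_world -i 0  # same arguments run_test passed above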
00:08:57.782 ************************************
00:08:57.782 END TEST nvme_hello_world
00:08:57.782 ************************************
00:08:57.782
00:08:57.782 real 0m0.325s
00:08:57.782 user 0m0.124s
00:08:57.782 sys 0m0.142s
00:08:57.782 10:52:44 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.782 10:52:44 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:57.782 10:52:44 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:57.782 10:52:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.782 10:52:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.782 10:52:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:57.782 ************************************
00:08:57.782 START TEST nvme_sgl
00:08:57.782 ************************************
00:08:57.782 10:52:44 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:58.045 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:58.045 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:58.045 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:58.045 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:58.045 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:58.045 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:58.045 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:58.045 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:58.045 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:58.305 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:58.305 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:58.306 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:58.306 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:58.306 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:58.306 NVMe Readv/Writev Request test
00:08:58.306 Attached to 0000:00:10.0
00:08:58.306 Attached to 0000:00:11.0
00:08:58.306 Attached to 0000:00:13.0
00:08:58.306 Attached to 0000:00:12.0
00:08:58.306 0000:00:10.0: build_io_request_2 test passed
00:08:58.306 0000:00:10.0: build_io_request_4 test passed
00:08:58.306 0000:00:10.0: build_io_request_5 test passed
00:08:58.306 0000:00:10.0: build_io_request_6 test passed
00:08:58.306 0000:00:10.0: build_io_request_7 test passed
00:08:58.306 0000:00:10.0: build_io_request_10 test passed
00:08:58.306 0000:00:11.0: build_io_request_2 test passed
00:08:58.306 0000:00:11.0: build_io_request_4 test passed
00:08:58.306 0000:00:11.0: build_io_request_5 test passed
00:08:58.306 0000:00:11.0: build_io_request_6 test passed
00:08:58.306 0000:00:11.0: build_io_request_7 test passed
00:08:58.306 0000:00:11.0: build_io_request_10 test passed
00:08:58.306 Cleaning up...
00:08:58.306
00:08:58.306 real 0m0.353s
00:08:58.306 user 0m0.172s
00:08:58.306 sys 0m0.138s
00:08:58.306 10:52:44 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.306 10:52:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:58.306 ************************************
00:08:58.306 END TEST nvme_sgl
00:08:58.306 ************************************
00:08:58.306 10:52:45 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:58.306 10:52:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:58.306 10:52:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:58.306 10:52:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:58.306 ************************************
00:08:58.306 START TEST nvme_e2edp
00:08:58.306 ************************************
00:08:58.306 10:52:45 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:58.565 NVMe Write/Read with End-to-End data protection test
00:08:58.565 Attached to 0000:00:10.0
00:08:58.565 Attached to 0000:00:11.0
00:08:58.565 Attached to 0000:00:13.0
00:08:58.565 Attached to 0000:00:12.0
00:08:58.565 Cleaning up...
00:08:58.565 00:08:58.565 real 0m0.307s 00:08:58.565 user 0m0.103s 00:08:58.565 sys 0m0.156s 00:08:58.565 ************************************ 00:08:58.565 END TEST nvme_e2edp 00:08:58.565 ************************************ 00:08:58.565 10:52:45 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.565 10:52:45 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 10:52:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:58.565 10:52:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.565 10:52:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.565 10:52:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.565 ************************************ 00:08:58.565 START TEST nvme_reserve 00:08:58.565 ************************************ 00:08:58.565 10:52:45 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:59.131 ===================================================== 00:08:59.131 NVMe Controller at PCI bus 0, device 16, function 0 00:08:59.131 ===================================================== 00:08:59.131 Reservations: Not Supported 00:08:59.131 ===================================================== 00:08:59.131 NVMe Controller at PCI bus 0, device 17, function 0 00:08:59.131 ===================================================== 00:08:59.131 Reservations: Not Supported 00:08:59.131 ===================================================== 00:08:59.131 NVMe Controller at PCI bus 0, device 19, function 0 00:08:59.131 ===================================================== 00:08:59.131 Reservations: Not Supported 00:08:59.131 ===================================================== 00:08:59.131 NVMe Controller at PCI bus 0, device 18, function 0 00:08:59.131 ===================================================== 00:08:59.131 Reservations: Not Supported 00:08:59.131 Reservation test passed 00:08:59.131 ************************************ 00:08:59.131 END TEST nvme_reserve 00:08:59.131 ************************************ 00:08:59.131 00:08:59.131 real 0m0.284s 00:08:59.131 user 0m0.108s 00:08:59.131 sys 0m0.132s 00:08:59.131 10:52:45 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.131 10:52:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:59.131 10:52:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:59.131 10:52:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.131 10:52:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.131 10:52:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.131 ************************************ 00:08:59.131 START TEST nvme_err_injection 00:08:59.131 ************************************ 00:08:59.131 10:52:45 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:59.389 NVMe Error Injection test 00:08:59.389 Attached to 0000:00:10.0 00:08:59.389 Attached to 0000:00:11.0 00:08:59.389 Attached to 0000:00:13.0 00:08:59.389 Attached to 0000:00:12.0 00:08:59.389 0000:00:10.0: get features failed as expected 00:08:59.389 0000:00:11.0: get features failed as expected 00:08:59.389 0000:00:13.0: get features failed as expected 00:08:59.389 0000:00:12.0: get features failed as expected 00:08:59.389 
0000:00:12.0: get features successfully as expected 00:08:59.389 0000:00:10.0: get features successfully as expected 00:08:59.389 0000:00:11.0: get features successfully as expected 00:08:59.389 0000:00:13.0: get features successfully as expected 00:08:59.389 0000:00:10.0: read failed as expected 00:08:59.389 0000:00:12.0: read failed as expected 00:08:59.389 0000:00:11.0: read failed as expected 00:08:59.389 0000:00:13.0: read failed as expected 00:08:59.389 0000:00:11.0: read successfully as expected 00:08:59.389 0000:00:10.0: read successfully as expected 00:08:59.389 0000:00:13.0: read successfully as expected 00:08:59.389 0000:00:12.0: read successfully as expected 00:08:59.389 Cleaning up... 00:08:59.389 00:08:59.389 real 0m0.294s 00:08:59.389 user 0m0.123s 00:08:59.389 sys 0m0.129s 00:08:59.389 ************************************ 00:08:59.389 END TEST nvme_err_injection 00:08:59.389 ************************************ 00:08:59.389 10:52:46 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.389 10:52:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:59.389 10:52:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:59.389 10:52:46 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:59.389 10:52:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.389 10:52:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.389 ************************************ 00:08:59.389 START TEST nvme_overhead 00:08:59.389 ************************************ 00:08:59.389 10:52:46 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:00.765 Initializing NVMe Controllers 00:09:00.765 Attached to 0000:00:10.0 00:09:00.765 Attached to 0000:00:11.0 00:09:00.765 Attached to 0000:00:13.0 00:09:00.765 Attached to 0000:00:12.0 00:09:00.765 Initialization complete. Launching workers. 
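[Note] The "get features failed as expected" / "get features successfully as expected" pairs in the nvme_err_injection run above are produced by arming a one-shot injected error for the Get Features opcode, issuing the command once (it must fail), then issuing it again after the injection count is exhausted (it must pass). A hedged sketch of that sequence; passing NULL as the qpair to target the admin queue is my reading of the helper, and completion handling is reduced to a flag:

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static bool g_done;
static bool g_failed;

static void
get_feat_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_done = true;
	g_failed = spdk_nvme_cpl_is_error(cpl);
}

/* Issue Get Features (Number of Queues; result returns in cdw0, so no
 * payload buffer is needed) and spin on the admin queue until done. */
static bool
get_number_of_queues(struct spdk_nvme_ctrlr *ctrlr)
{
	g_done = false;
	spdk_nvme_ctrlr_cmd_get_feature(ctrlr, SPDK_NVME_FEAT_NUMBER_OF_QUEUES,
					0, NULL, 0, get_feat_done, NULL);
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return !g_failed;
}

static int
err_injection_round(struct spdk_nvme_ctrlr *ctrlr)
{
	/* One injected failure: SCT 0 (generic), SC 1 (Invalid Command Opcode). */
	spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL /* admin queue */,
						SPDK_NVME_OPC_GET_FEATURES,
						false /* do_not_submit */,
						0 /* timeout_in_us */,
						1 /* err_count */,
						SPDK_NVME_SCT_GENERIC,
						SPDK_NVME_SC_INVALID_OPCODE);

	if (get_number_of_queues(ctrlr)) {
		return -1;  /* should have failed as expected */
	}
	if (!get_number_of_queues(ctrlr)) {
		return -1;  /* injection consumed; should now succeed */
	}
	printf("get features failed, then succeeded, as expected\n");
	return 0;
}
```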
00:09:00.765 submit (in ns) avg, min, max = 13897.5, 10847.4, 90883.5 00:09:00.765 complete (in ns) avg, min, max = 8287.7, 7830.5, 102604.0 00:09:00.765 00:09:00.765 Submit histogram 00:09:00.765 ================ 00:09:00.765 Range in us Cumulative Count 00:09:00.765 10.847 - 10.898: 0.0489% ( 3) 00:09:00.765 10.898 - 10.949: 0.0652% ( 1) 00:09:00.765 11.052 - 11.104: 0.0815% ( 1) 00:09:00.765 11.104 - 11.155: 0.0978% ( 1) 00:09:00.765 11.155 - 11.206: 0.1141% ( 1) 00:09:00.765 11.258 - 11.309: 0.1304% ( 1) 00:09:00.765 11.361 - 11.412: 0.1631% ( 2) 00:09:00.765 11.618 - 11.669: 0.1794% ( 1) 00:09:00.765 11.720 - 11.772: 0.1957% ( 1) 00:09:00.765 12.492 - 12.543: 0.2120% ( 1) 00:09:00.765 12.903 - 12.954: 0.2446% ( 2) 00:09:00.765 12.954 - 13.006: 0.2935% ( 3) 00:09:00.765 13.006 - 13.057: 0.4565% ( 10) 00:09:00.765 13.057 - 13.108: 1.1903% ( 45) 00:09:00.765 13.108 - 13.160: 2.7882% ( 98) 00:09:00.765 13.160 - 13.263: 9.6853% ( 423) 00:09:00.765 13.263 - 13.365: 21.7349% ( 739) 00:09:00.765 13.365 - 13.468: 36.7357% ( 920) 00:09:00.765 13.468 - 13.571: 52.2094% ( 949) 00:09:00.765 13.571 - 13.674: 65.3677% ( 807) 00:09:00.765 13.674 - 13.777: 76.0476% ( 655) 00:09:00.765 13.777 - 13.880: 83.3197% ( 446) 00:09:00.765 13.880 - 13.982: 88.6189% ( 325) 00:09:00.765 13.982 - 14.085: 91.5702% ( 181) 00:09:00.765 14.085 - 14.188: 93.2985% ( 106) 00:09:00.765 14.188 - 14.291: 93.8529% ( 34) 00:09:00.765 14.291 - 14.394: 94.1301% ( 17) 00:09:00.765 14.394 - 14.496: 94.3258% ( 12) 00:09:00.765 14.496 - 14.599: 94.4236% ( 6) 00:09:00.765 14.599 - 14.702: 94.4725% ( 3) 00:09:00.765 14.702 - 14.805: 94.5051% ( 2) 00:09:00.765 14.805 - 14.908: 94.5214% ( 1) 00:09:00.765 15.010 - 15.113: 94.5377% ( 1) 00:09:00.765 15.216 - 15.319: 94.5704% ( 2) 00:09:00.765 15.319 - 15.422: 94.5867% ( 1) 00:09:00.765 15.422 - 15.524: 94.6030% ( 1) 00:09:00.765 15.524 - 15.627: 94.6356% ( 2) 00:09:00.765 15.627 - 15.730: 94.6682% ( 2) 00:09:00.765 15.730 - 15.833: 94.6845% ( 1) 00:09:00.765 16.039 - 16.141: 94.7334% ( 3) 00:09:00.765 16.141 - 16.244: 94.7497% ( 1) 00:09:00.765 16.655 - 16.758: 94.7660% ( 1) 00:09:00.765 16.758 - 16.861: 94.8149% ( 3) 00:09:00.765 16.861 - 16.964: 94.8475% ( 2) 00:09:00.765 16.964 - 17.067: 94.8802% ( 2) 00:09:00.765 17.067 - 17.169: 94.9454% ( 4) 00:09:00.765 17.169 - 17.272: 94.9617% ( 1) 00:09:00.765 17.272 - 17.375: 94.9943% ( 2) 00:09:00.765 17.375 - 17.478: 95.0595% ( 4) 00:09:00.765 17.478 - 17.581: 95.1084% ( 3) 00:09:00.765 17.581 - 17.684: 95.2715% ( 10) 00:09:00.765 17.684 - 17.786: 95.5324% ( 16) 00:09:00.765 17.786 - 17.889: 95.8096% ( 17) 00:09:00.765 17.889 - 17.992: 96.0704% ( 16) 00:09:00.766 17.992 - 18.095: 96.3476% ( 17) 00:09:00.766 18.095 - 18.198: 96.5922% ( 15) 00:09:00.766 18.198 - 18.300: 96.8205% ( 14) 00:09:00.766 18.300 - 18.403: 97.0651% ( 15) 00:09:00.766 18.403 - 18.506: 97.2281% ( 10) 00:09:00.766 18.506 - 18.609: 97.3422% ( 7) 00:09:00.766 18.609 - 18.712: 97.5053% ( 10) 00:09:00.766 18.712 - 18.814: 97.6031% ( 6) 00:09:00.766 18.814 - 18.917: 97.6847% ( 5) 00:09:00.766 18.917 - 19.020: 97.8966% ( 13) 00:09:00.766 19.020 - 19.123: 97.9945% ( 6) 00:09:00.766 19.123 - 19.226: 98.1412% ( 9) 00:09:00.766 19.226 - 19.329: 98.3043% ( 10) 00:09:00.766 19.329 - 19.431: 98.4021% ( 6) 00:09:00.766 19.431 - 19.534: 98.4673% ( 4) 00:09:00.766 19.534 - 19.637: 98.6304% ( 10) 00:09:00.766 19.637 - 19.740: 98.6630% ( 2) 00:09:00.766 19.740 - 19.843: 98.7445% ( 5) 00:09:00.766 19.843 - 19.945: 98.8097% ( 4) 00:09:00.766 19.945 - 20.048: 98.9075% ( 6) 00:09:00.766 20.048 
- 20.151: 98.9728% ( 4) 00:09:00.766 20.151 - 20.254: 99.0217% ( 3) 00:09:00.766 20.254 - 20.357: 99.1358% ( 7) 00:09:00.766 20.357 - 20.459: 99.1521% ( 1) 00:09:00.766 20.562 - 20.665: 99.2173% ( 4) 00:09:00.766 20.665 - 20.768: 99.2989% ( 5) 00:09:00.766 20.768 - 20.871: 99.3152% ( 1) 00:09:00.766 20.871 - 20.973: 99.3967% ( 5) 00:09:00.766 20.973 - 21.076: 99.4456% ( 3) 00:09:00.766 21.076 - 21.179: 99.4945% ( 3) 00:09:00.766 21.488 - 21.590: 99.5108% ( 1) 00:09:00.766 21.590 - 21.693: 99.5271% ( 1) 00:09:00.766 21.693 - 21.796: 99.5598% ( 2) 00:09:00.766 21.899 - 22.002: 99.5761% ( 1) 00:09:00.766 22.104 - 22.207: 99.6087% ( 2) 00:09:00.766 22.413 - 22.516: 99.6250% ( 1) 00:09:00.766 22.824 - 22.927: 99.6576% ( 2) 00:09:00.766 23.235 - 23.338: 99.6739% ( 1) 00:09:00.766 23.338 - 23.441: 99.7065% ( 2) 00:09:00.766 23.544 - 23.647: 99.7228% ( 1) 00:09:00.766 23.749 - 23.852: 99.7391% ( 1) 00:09:00.766 23.955 - 24.058: 99.7554% ( 1) 00:09:00.766 24.263 - 24.366: 99.7717% ( 1) 00:09:00.766 24.366 - 24.469: 99.7880% ( 1) 00:09:00.766 24.983 - 25.086: 99.8043% ( 1) 00:09:00.766 25.497 - 25.600: 99.8206% ( 1) 00:09:00.766 26.114 - 26.217: 99.8369% ( 1) 00:09:00.766 26.937 - 27.142: 99.8533% ( 1) 00:09:00.766 29.815 - 30.021: 99.8696% ( 1) 00:09:00.766 30.432 - 30.638: 99.8859% ( 1) 00:09:00.766 30.843 - 31.049: 99.9022% ( 1) 00:09:00.766 44.003 - 44.209: 99.9185% ( 1) 00:09:00.766 45.031 - 45.237: 99.9348% ( 1) 00:09:00.766 46.265 - 46.471: 99.9511% ( 1) 00:09:00.766 47.910 - 48.116: 99.9674% ( 1) 00:09:00.766 81.838 - 82.249: 99.9837% ( 1) 00:09:00.766 90.474 - 90.885: 100.0000% ( 1) 00:09:00.766 00:09:00.766 Complete histogram 00:09:00.766 ================== 00:09:00.766 Range in us Cumulative Count 00:09:00.766 7.814 - 7.865: 0.1304% ( 8) 00:09:00.766 7.865 - 7.916: 1.8262% ( 104) 00:09:00.766 7.916 - 7.968: 7.3211% ( 337) 00:09:00.766 7.968 - 8.019: 21.5229% ( 871) 00:09:00.766 8.019 - 8.071: 39.1652% ( 1082) 00:09:00.766 8.071 - 8.122: 55.9922% ( 1032) 00:09:00.766 8.122 - 8.173: 69.8353% ( 849) 00:09:00.766 8.173 - 8.225: 79.5532% ( 596) 00:09:00.766 8.225 - 8.276: 85.4394% ( 361) 00:09:00.766 8.276 - 8.328: 89.0918% ( 224) 00:09:00.766 8.328 - 8.379: 90.8854% ( 110) 00:09:00.766 8.379 - 8.431: 92.1083% ( 75) 00:09:00.766 8.431 - 8.482: 93.1681% ( 65) 00:09:00.766 8.482 - 8.533: 94.1627% ( 61) 00:09:00.766 8.533 - 8.585: 95.2389% ( 66) 00:09:00.766 8.585 - 8.636: 96.2498% ( 62) 00:09:00.766 8.636 - 8.688: 96.9672% ( 44) 00:09:00.766 8.688 - 8.739: 97.2607% ( 18) 00:09:00.766 8.739 - 8.790: 97.5053% ( 15) 00:09:00.766 8.790 - 8.842: 97.7499% ( 15) 00:09:00.766 8.842 - 8.893: 97.8477% ( 6) 00:09:00.766 8.893 - 8.945: 97.9129% ( 4) 00:09:00.766 8.945 - 8.996: 97.9945% ( 5) 00:09:00.766 8.996 - 9.047: 98.0434% ( 3) 00:09:00.766 9.047 - 9.099: 98.0597% ( 1) 00:09:00.766 9.099 - 9.150: 98.0760% ( 1) 00:09:00.766 9.150 - 9.202: 98.1086% ( 2) 00:09:00.766 9.304 - 9.356: 98.1249% ( 1) 00:09:00.766 9.407 - 9.459: 98.1412% ( 1) 00:09:00.766 9.664 - 9.716: 98.1575% ( 1) 00:09:00.766 9.716 - 9.767: 98.1738% ( 1) 00:09:00.766 9.870 - 9.921: 98.1901% ( 1) 00:09:00.766 9.921 - 9.973: 98.2064% ( 1) 00:09:00.766 10.538 - 10.590: 98.2227% ( 1) 00:09:00.766 10.795 - 10.847: 98.2390% ( 1) 00:09:00.766 10.847 - 10.898: 98.2553% ( 1) 00:09:00.766 11.669 - 11.720: 98.2716% ( 1) 00:09:00.766 11.720 - 11.772: 98.2880% ( 1) 00:09:00.766 11.772 - 11.823: 98.3043% ( 1) 00:09:00.766 11.978 - 12.029: 98.3206% ( 1) 00:09:00.766 12.132 - 12.183: 98.3369% ( 1) 00:09:00.766 12.235 - 12.286: 98.3532% ( 1) 00:09:00.766 
12.594 - 12.646: 98.3695% ( 1) 00:09:00.766 13.057 - 13.108: 98.4021% ( 2) 00:09:00.766 13.160 - 13.263: 98.4999% ( 6) 00:09:00.766 13.263 - 13.365: 98.5977% ( 6) 00:09:00.766 13.365 - 13.468: 98.7445% ( 9) 00:09:00.766 13.468 - 13.571: 98.9402% ( 12) 00:09:00.766 13.571 - 13.674: 99.0217% ( 5) 00:09:00.766 13.674 - 13.777: 99.1521% ( 8) 00:09:00.766 13.777 - 13.880: 99.2500% ( 6) 00:09:00.766 13.880 - 13.982: 99.3315% ( 5) 00:09:00.766 13.982 - 14.085: 99.3804% ( 3) 00:09:00.766 14.085 - 14.188: 99.4456% ( 4) 00:09:00.766 14.188 - 14.291: 99.4782% ( 2) 00:09:00.766 14.291 - 14.394: 99.4945% ( 1) 00:09:00.766 14.394 - 14.496: 99.5435% ( 3) 00:09:00.766 14.496 - 14.599: 99.5761% ( 2) 00:09:00.766 14.702 - 14.805: 99.5924% ( 1) 00:09:00.766 14.908 - 15.010: 99.6087% ( 1) 00:09:00.766 15.319 - 15.422: 99.6250% ( 1) 00:09:00.766 15.833 - 15.936: 99.6413% ( 1) 00:09:00.766 16.655 - 16.758: 99.6576% ( 1) 00:09:00.766 16.861 - 16.964: 99.6739% ( 1) 00:09:00.766 17.067 - 17.169: 99.6902% ( 1) 00:09:00.766 17.478 - 17.581: 99.7065% ( 1) 00:09:00.766 17.581 - 17.684: 99.7228% ( 1) 00:09:00.766 19.637 - 19.740: 99.7391% ( 1) 00:09:00.766 20.357 - 20.459: 99.7554% ( 1) 00:09:00.766 20.562 - 20.665: 99.7717% ( 1) 00:09:00.766 20.665 - 20.768: 99.7880% ( 1) 00:09:00.766 20.768 - 20.871: 99.8043% ( 1) 00:09:00.766 23.030 - 23.133: 99.8206% ( 1) 00:09:00.766 23.852 - 23.955: 99.8369% ( 1) 00:09:00.766 26.217 - 26.320: 99.8533% ( 1) 00:09:00.766 26.525 - 26.731: 99.8696% ( 1) 00:09:00.766 27.553 - 27.759: 99.8859% ( 1) 00:09:00.766 29.815 - 30.021: 99.9022% ( 1) 00:09:00.766 34.750 - 34.956: 99.9185% ( 1) 00:09:00.766 35.161 - 35.367: 99.9348% ( 1) 00:09:00.766 37.218 - 37.423: 99.9511% ( 1) 00:09:00.766 37.423 - 37.629: 99.9674% ( 1) 00:09:00.766 49.144 - 49.349: 99.9837% ( 1) 00:09:00.766 102.400 - 102.811: 100.0000% ( 1) 00:09:00.766 00:09:00.766 00:09:00.766 real 0m1.314s 00:09:00.766 user 0m1.101s 00:09:00.766 sys 0m0.153s 00:09:00.766 ************************************ 00:09:00.766 END TEST nvme_overhead 00:09:00.766 ************************************ 00:09:00.766 10:52:47 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.766 10:52:47 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:00.766 10:52:47 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:00.766 10:52:47 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:00.766 10:52:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.766 10:52:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.766 ************************************ 00:09:00.766 START TEST nvme_arbitration 00:09:00.766 ************************************ 00:09:00.766 10:52:47 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:04.956 Initializing NVMe Controllers 00:09:04.956 Attached to 0000:00:10.0 00:09:04.956 Attached to 0000:00:11.0 00:09:04.956 Attached to 0000:00:13.0 00:09:04.956 Attached to 0000:00:12.0 00:09:04.956 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:04.956 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:04.956 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:04.956 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:04.956 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:04.956 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:04.956 
/home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:04.956 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:04.956 Initialization complete. Launching workers. 00:09:04.956 Starting thread on core 1 with urgent priority queue 00:09:04.956 Starting thread on core 2 with urgent priority queue 00:09:04.956 Starting thread on core 3 with urgent priority queue 00:09:04.956 Starting thread on core 0 with urgent priority queue 00:09:04.956 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:09:04.956 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:09:04.956 QEMU NVMe Ctrl (12341 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:09:04.956 QEMU NVMe Ctrl (12342 ) core 1: 533.33 IO/s 187.50 secs/100000 ios 00:09:04.956 QEMU NVMe Ctrl (12343 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:09:04.956 QEMU NVMe Ctrl (12342 ) core 3: 597.33 IO/s 167.41 secs/100000 ios 00:09:04.956 ======================================================== 00:09:04.956 00:09:04.956 ************************************ 00:09:04.956 END TEST nvme_arbitration 00:09:04.956 ************************************ 00:09:04.956 00:09:04.956 real 0m3.475s 00:09:04.956 user 0m9.474s 00:09:04.956 sys 0m0.178s 00:09:04.956 10:52:50 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.956 10:52:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:04.956 10:52:51 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:04.956 10:52:51 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.956 10:52:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.956 10:52:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.956 ************************************ 00:09:04.956 START TEST nvme_single_aen 00:09:04.956 ************************************ 00:09:04.956 10:52:51 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:04.956 Asynchronous Event Request test 00:09:04.956 Attached to 0000:00:10.0 00:09:04.956 Attached to 0000:00:11.0 00:09:04.956 Attached to 0000:00:13.0 00:09:04.956 Attached to 0000:00:12.0 00:09:04.956 Reset controller to setup AER completions for this process 00:09:04.956 Registering asynchronous event callbacks... 
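[Note] The per-core IOPS spread in the arbitration run above (every worker on an "urgent priority queue", per the "-a 0 -b 0" configuration) comes from each thread allocating its I/O qpair in a weighted-round-robin priority class. A sketch of the qpair setup, assuming the controller was brought up with WRR arbitration enabled in its controller options; this is not the example's own code:

```c
#include "spdk/nvme.h"

/* Allocate one I/O qpair in the requested WRR class (urgent/high/
 * medium/low). Only meaningful when the controller was initialized
 * with opts.arb_mechanism = SPDK_NVME_CC_AMS_WRR. */
static struct spdk_nvme_qpair *
alloc_wrr_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio qprio)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.qprio = qprio;  /* e.g. SPDK_NVME_QPRIO_URGENT */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}
```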
00:09:04.956 Getting orig temperature thresholds of all controllers 00:09:04.956 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.956 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.956 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.956 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:04.956 Setting all controllers temperature threshold low to trigger AER 00:09:04.956 Waiting for all controllers temperature threshold to be set lower 00:09:04.956 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.956 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:04.956 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.956 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:04.957 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.957 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:04.957 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:04.957 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:04.957 Waiting for all controllers to trigger AER and reset threshold 00:09:04.957 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.957 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.957 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.957 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:04.957 Cleaning up... 00:09:04.957 00:09:04.957 real 0m0.311s 00:09:04.957 user 0m0.108s 00:09:04.957 sys 0m0.145s 00:09:04.957 ************************************ 00:09:04.957 END TEST nvme_single_aen 00:09:04.957 ************************************ 00:09:04.957 10:52:51 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.957 10:52:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:04.957 10:52:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:04.957 10:52:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:04.957 10:52:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.957 10:52:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:04.957 ************************************ 00:09:04.957 START TEST nvme_doorbell_aers 00:09:04.957 ************************************ 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
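[Note] The nvme_single_aen output just above follows a fixed script: read each controller's original 343 K temperature threshold, set the threshold below the reported 323 K composite temperature so the device raises a SMART async event, then restore the original value. Roughly, per controller (a sketch; completion handling is trimmed to a spin on the admin queue, and the 300 K threshold is an illustrative value):

```c
#include <stdbool.h>
#include "spdk/nvme.h"

static bool g_temp_aer;

static void
aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		g_temp_aer = true;  /* SMART/health event, log page 2 */
	}
}

static void
set_feat_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
}

/* Lower the composite temperature threshold (cdw11, in Kelvin) below
 * the device's current 323 K so an AER fires, then wait for it. */
static void
trigger_temp_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
	spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
					SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
					300 /* cdw11: new threshold, K */, 0,
					NULL, 0, set_feat_done, NULL);
	while (!g_temp_aer) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}
```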
00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:04.957 10:52:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:05.215 [2024-11-15 10:52:51.884163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:15.193 Executing: test_write_invalid_db 00:09:15.193 Waiting for AER completion... 00:09:15.193 Failure: test_write_invalid_db 00:09:15.193 00:09:15.193 Executing: test_invalid_db_write_overflow_sq 00:09:15.193 Waiting for AER completion... 00:09:15.193 Failure: test_invalid_db_write_overflow_sq 00:09:15.193 00:09:15.193 Executing: test_invalid_db_write_overflow_cq 00:09:15.193 Waiting for AER completion... 00:09:15.193 Failure: test_invalid_db_write_overflow_cq 00:09:15.193 00:09:15.193 10:53:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:15.193 10:53:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:15.193 [2024-11-15 10:53:01.943738] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:25.166 Executing: test_write_invalid_db 00:09:25.166 Waiting for AER completion... 00:09:25.166 Failure: test_write_invalid_db 00:09:25.166 00:09:25.166 Executing: test_invalid_db_write_overflow_sq 00:09:25.166 Waiting for AER completion... 00:09:25.166 Failure: test_invalid_db_write_overflow_sq 00:09:25.166 00:09:25.166 Executing: test_invalid_db_write_overflow_cq 00:09:25.166 Waiting for AER completion... 00:09:25.166 Failure: test_invalid_db_write_overflow_cq 00:09:25.166 00:09:25.166 10:53:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:25.166 10:53:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:25.166 [2024-11-15 10:53:11.980565] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:35.267 Executing: test_write_invalid_db 00:09:35.267 Waiting for AER completion... 00:09:35.267 Failure: test_write_invalid_db 00:09:35.267 00:09:35.267 Executing: test_invalid_db_write_overflow_sq 00:09:35.267 Waiting for AER completion... 00:09:35.267 Failure: test_invalid_db_write_overflow_sq 00:09:35.267 00:09:35.267 Executing: test_invalid_db_write_overflow_cq 00:09:35.267 Waiting for AER completion... 
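[Note] Each doorbell_aers cycle in this run ("test_write_invalid_db", then the SQ/CQ overflow variants, with "Failure" being the expected outcome of the bad write) pokes out-of-range values into a queue's doorbell register and waits for the controller's async error. The register being abused sits at a spec-defined offset from BAR0; the arithmetic is (plain spec math, not the test's code):

```c
#include <stdint.h>

/* NVMe doorbell layout: submission queue y's tail doorbell is at
 * 0x1000 + (2*y) * (4 << CAP.DSTRD) from BAR0, and completion queue
 * y's head doorbell follows one stride later. Writing a tail/head
 * value outside the queue's range provokes the AER the runs here
 * wait on. */
static uint64_t
sq_tail_doorbell(uint16_t qid, uint32_t dstrd)
{
	return 0x1000 + (2u * qid) * (4u << dstrd);
}

static uint64_t
cq_head_doorbell(uint16_t qid, uint32_t dstrd)
{
	return 0x1000 + (2u * qid + 1u) * (4u << dstrd);
}
```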
00:09:35.267 Failure: test_invalid_db_write_overflow_cq 00:09:35.267 00:09:35.267 10:53:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:35.267 10:53:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:35.268 [2024-11-15 10:53:22.032013] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.265 Executing: test_write_invalid_db 00:09:45.265 Waiting for AER completion... 00:09:45.265 Failure: test_write_invalid_db 00:09:45.265 00:09:45.265 Executing: test_invalid_db_write_overflow_sq 00:09:45.266 Waiting for AER completion... 00:09:45.266 Failure: test_invalid_db_write_overflow_sq 00:09:45.266 00:09:45.266 Executing: test_invalid_db_write_overflow_cq 00:09:45.266 Waiting for AER completion... 00:09:45.266 Failure: test_invalid_db_write_overflow_cq 00:09:45.266 00:09:45.266 00:09:45.266 real 0m40.330s 00:09:45.266 user 0m28.528s 00:09:45.266 sys 0m11.413s 00:09:45.266 10:53:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.266 ************************************ 00:09:45.266 END TEST nvme_doorbell_aers 00:09:45.266 ************************************ 00:09:45.266 10:53:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:45.266 10:53:31 nvme -- nvme/nvme.sh@97 -- # uname 00:09:45.266 10:53:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:45.266 10:53:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:45.266 10:53:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:45.266 10:53:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.266 10:53:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.266 ************************************ 00:09:45.266 START TEST nvme_multi_aen 00:09:45.266 ************************************ 00:09:45.266 10:53:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:45.563 [2024-11-15 10:53:32.128438] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.128550] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.128573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.130201] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.130239] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.130256] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.131606] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. 
Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.131644] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.131658] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.133027] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.133067] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 [2024-11-15 10:53:32.133081] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64512) is not found. Dropping the request. 00:09:45.563 Child process pid: 65028 00:09:45.852 [Child] Asynchronous Event Request test 00:09:45.852 [Child] Attached to 0000:00:10.0 00:09:45.852 [Child] Attached to 0000:00:11.0 00:09:45.852 [Child] Attached to 0000:00:13.0 00:09:45.852 [Child] Attached to 0000:00:12.0 00:09:45.852 [Child] Registering asynchronous event callbacks... 00:09:45.852 [Child] Getting orig temperature thresholds of all controllers 00:09:45.853 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:45.853 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 [Child] Cleaning up... 00:09:45.853 Asynchronous Event Request test 00:09:45.853 Attached to 0000:00:10.0 00:09:45.853 Attached to 0000:00:11.0 00:09:45.853 Attached to 0000:00:13.0 00:09:45.853 Attached to 0000:00:12.0 00:09:45.853 Reset controller to setup AER completions for this process 00:09:45.853 Registering asynchronous event callbacks... 
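[Note] The "aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01" lines in the [Child] block above (and in every AER run in this log) are a direct decode of the AER completion's dword 0, where the spec packs the event type, event info, and the log page to read. Type 0x1 is SMART/health, info 0x1 the temperature-threshold event, and log page 2 the SMART log, matching the values printed:

```c
#include <stdint.h>
#include <stdio.h>

/* AER completion dword 0 per the NVMe spec: bits 2:0 asynchronous
 * event type, bits 15:8 event info, bits 23:16 associated log page. */
static void
decode_aer(uint32_t cdw0)
{
	uint32_t type = cdw0 & 0x7;
	uint32_t info = (cdw0 >> 8) & 0xff;
	uint32_t page = (cdw0 >> 16) & 0xff;

	printf("aer_cb for log page %u, aen_event_type: 0x%02x, "
	       "aen_event_info: 0x%02x\n", page, type, info);
}

int
main(void)
{
	decode_aer((2u << 16) | (1u << 8) | 1u);  /* the values logged above */
	return 0;
}
```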
00:09:45.853 Getting orig temperature thresholds of all controllers 00:09:45.853 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:45.853 Setting all controllers temperature threshold low to trigger AER 00:09:45.853 Waiting for all controllers temperature threshold to be set lower 00:09:45.853 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:45.853 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:45.853 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:45.853 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:45.853 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:45.853 Waiting for all controllers to trigger AER and reset threshold 00:09:45.853 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:45.853 Cleaning up... 00:09:45.853 ************************************ 00:09:45.853 END TEST nvme_multi_aen 00:09:45.853 ************************************ 00:09:45.853 00:09:45.853 real 0m0.631s 00:09:45.853 user 0m0.207s 00:09:45.853 sys 0m0.313s 00:09:45.853 10:53:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.853 10:53:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:45.853 10:53:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:45.853 10:53:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.853 10:53:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.853 10:53:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.853 ************************************ 00:09:45.853 START TEST nvme_startup 00:09:45.853 ************************************ 00:09:45.853 10:53:32 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:46.142 Initializing NVMe Controllers 00:09:46.142 Attached to 0000:00:10.0 00:09:46.142 Attached to 0000:00:11.0 00:09:46.142 Attached to 0000:00:13.0 00:09:46.142 Attached to 0000:00:12.0 00:09:46.142 Initialization complete. 00:09:46.142 Time used:198722.656 (us). 
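[Note] nvme_startup's only output above is the attach banner plus "Time used:198722.656 (us)": it wall-clocks how long probing and attaching all four controllers takes. The measurement reduces to this shape (a sketch; attach_all() stands in for the spdk_nvme_probe() pass and is a placeholder, not a real symbol):

```c
#include <stdio.h>
#include <time.h>

/* Placeholder for the probe/attach pass the test times. */
static int
attach_all(void)
{
	return 0;
}

int
main(void)
{
	struct timespec t0, t1;
	double us;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (attach_all() != 0) {
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
	printf("Time used:%.3f (us).\n", us);
	return 0;
}
```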
00:09:46.142 00:09:46.142 real 0m0.300s 00:09:46.142 user 0m0.104s 00:09:46.142 sys 0m0.149s 00:09:46.142 10:53:32 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.143 ************************************ 00:09:46.143 END TEST nvme_startup 00:09:46.143 ************************************ 00:09:46.143 10:53:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:46.143 10:53:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:46.143 10:53:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:46.143 10:53:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.143 10:53:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:46.143 ************************************ 00:09:46.143 START TEST nvme_multi_secondary 00:09:46.143 ************************************ 00:09:46.143 10:53:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:46.143 10:53:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65084 00:09:46.143 10:53:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:46.143 10:53:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65085 00:09:46.143 10:53:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:46.143 10:53:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:50.340 Initializing NVMe Controllers 00:09:50.340 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:50.340 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:50.340 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:50.340 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:50.340 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:50.340 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:50.340 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:50.340 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:50.340 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:50.340 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:50.340 Initialization complete. Launching workers. 
00:09:50.340 ======================================================== 00:09:50.340 Latency(us) 00:09:50.340 Device Information : IOPS MiB/s Average min max 00:09:50.340 PCIE (0000:00:10.0) NSID 1 from core 1: 4976.77 19.44 3212.53 1493.04 7549.45 00:09:50.340 PCIE (0000:00:11.0) NSID 1 from core 1: 4976.77 19.44 3214.45 1440.15 7407.64 00:09:50.340 PCIE (0000:00:13.0) NSID 1 from core 1: 4976.77 19.44 3214.79 1613.41 7241.80 00:09:50.340 PCIE (0000:00:12.0) NSID 1 from core 1: 4976.77 19.44 3215.00 1442.84 6871.37 00:09:50.340 PCIE (0000:00:12.0) NSID 2 from core 1: 4976.77 19.44 3215.27 1584.20 7880.81 00:09:50.340 PCIE (0000:00:12.0) NSID 3 from core 1: 4976.77 19.44 3215.38 1572.88 7690.89 00:09:50.340 ======================================================== 00:09:50.340 Total : 29860.62 116.64 3214.57 1440.15 7880.81 00:09:50.340 00:09:50.340 Initializing NVMe Controllers 00:09:50.340 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:50.340 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:50.340 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:50.340 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:50.340 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:50.340 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:50.340 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:50.340 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:50.340 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:50.340 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:50.340 Initialization complete. Launching workers. 00:09:50.340 ======================================================== 00:09:50.340 Latency(us) 00:09:50.340 Device Information : IOPS MiB/s Average min max 00:09:50.340 PCIE (0000:00:10.0) NSID 1 from core 2: 3112.79 12.16 5137.96 1298.37 13032.80 00:09:50.340 PCIE (0000:00:11.0) NSID 1 from core 2: 3112.79 12.16 5135.87 1126.67 12668.53 00:09:50.340 PCIE (0000:00:13.0) NSID 1 from core 2: 3112.79 12.16 5132.73 1219.82 13522.86 00:09:50.340 PCIE (0000:00:12.0) NSID 1 from core 2: 3112.79 12.16 5132.38 1243.73 12297.53 00:09:50.340 PCIE (0000:00:12.0) NSID 2 from core 2: 3112.79 12.16 5132.65 1352.46 12251.77 00:09:50.340 PCIE (0000:00:12.0) NSID 3 from core 2: 3112.79 12.16 5132.51 1338.02 12644.61 00:09:50.340 ======================================================== 00:09:50.340 Total : 18676.76 72.96 5134.02 1126.67 13522.86 00:09:50.340 00:09:50.340 10:53:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65084 00:09:51.718 Initializing NVMe Controllers 00:09:51.718 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.718 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.718 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.718 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.718 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:51.718 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:51.718 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:51.718 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:51.718 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:51.718 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:51.718 Initialization complete. Launching workers. 
00:09:51.718 ======================================================== 00:09:51.718 Latency(us) 00:09:51.718 Device Information : IOPS MiB/s Average min max 00:09:51.718 PCIE (0000:00:10.0) NSID 1 from core 0: 8513.42 33.26 1877.91 874.63 6939.30 00:09:51.718 PCIE (0000:00:11.0) NSID 1 from core 0: 8513.42 33.26 1878.94 902.66 7337.72 00:09:51.718 PCIE (0000:00:13.0) NSID 1 from core 0: 8513.42 33.26 1878.92 907.69 6917.74 00:09:51.718 PCIE (0000:00:12.0) NSID 1 from core 0: 8513.42 33.26 1878.88 857.88 6517.78 00:09:51.718 PCIE (0000:00:12.0) NSID 2 from core 0: 8513.42 33.26 1878.85 797.16 7187.84 00:09:51.718 PCIE (0000:00:12.0) NSID 3 from core 0: 8516.62 33.27 1878.13 772.68 6823.28 00:09:51.718 ======================================================== 00:09:51.718 Total : 51083.71 199.55 1878.60 772.68 7337.72 00:09:51.718 00:09:51.718 10:53:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65085 00:09:51.718 10:53:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65154 00:09:51.718 10:53:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:51.718 10:53:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65155 00:09:51.718 10:53:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:51.718 10:53:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:55.006 Initializing NVMe Controllers 00:09:55.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:55.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:55.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:55.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:55.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:55.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:55.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:55.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:55.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:55.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:55.006 Initialization complete. Launching workers. 
00:09:55.006 ======================================================== 00:09:55.006 Latency(us) 00:09:55.006 Device Information : IOPS MiB/s Average min max 00:09:55.006 PCIE (0000:00:10.0) NSID 1 from core 1: 5155.07 20.14 3101.47 1002.05 5784.23 00:09:55.006 PCIE (0000:00:11.0) NSID 1 from core 1: 5155.07 20.14 3103.27 1030.20 5715.15 00:09:55.006 PCIE (0000:00:13.0) NSID 1 from core 1: 5155.07 20.14 3103.46 1027.48 6037.35 00:09:55.006 PCIE (0000:00:12.0) NSID 1 from core 1: 5155.07 20.14 3103.71 1019.09 6573.88 00:09:55.006 PCIE (0000:00:12.0) NSID 2 from core 1: 5155.07 20.14 3103.81 1034.91 7115.55 00:09:55.006 PCIE (0000:00:12.0) NSID 3 from core 1: 5160.40 20.16 3100.83 1018.41 5870.67 00:09:55.006 ======================================================== 00:09:55.006 Total : 30935.75 120.84 3102.76 1002.05 7115.55 00:09:55.006 00:09:55.006 Initializing NVMe Controllers 00:09:55.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:55.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:55.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:55.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:55.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:55.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:55.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:55.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:55.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:55.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:55.006 Initialization complete. Launching workers. 00:09:55.006 ======================================================== 00:09:55.006 Latency(us) 00:09:55.006 Device Information : IOPS MiB/s Average min max 00:09:55.006 PCIE (0000:00:10.0) NSID 1 from core 0: 5290.28 20.67 3022.00 968.64 5947.27 00:09:55.006 PCIE (0000:00:11.0) NSID 1 from core 0: 5290.28 20.67 3023.66 979.25 5849.36 00:09:55.006 PCIE (0000:00:13.0) NSID 1 from core 0: 5290.28 20.67 3023.58 913.43 5654.60 00:09:55.006 PCIE (0000:00:12.0) NSID 1 from core 0: 5290.28 20.67 3023.48 875.88 5643.85 00:09:55.006 PCIE (0000:00:12.0) NSID 2 from core 0: 5290.28 20.67 3023.43 824.91 5449.55 00:09:55.006 PCIE (0000:00:12.0) NSID 3 from core 0: 5290.28 20.67 3023.35 800.50 5836.87 00:09:55.006 ======================================================== 00:09:55.006 Total : 31741.70 123.99 3023.25 800.50 5947.27 00:09:55.006 00:09:56.910 Initializing NVMe Controllers 00:09:56.910 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:56.910 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:56.910 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:56.910 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:56.910 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:56.910 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:56.910 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:56.910 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:56.910 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:56.910 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:56.910 Initialization complete. Launching workers. 
00:09:56.910 ======================================================== 00:09:56.910 Latency(us) 00:09:56.910 Device Information : IOPS MiB/s Average min max 00:09:56.910 PCIE (0000:00:10.0) NSID 1 from core 2: 3279.46 12.81 4877.70 1080.88 12549.15 00:09:56.910 PCIE (0000:00:11.0) NSID 1 from core 2: 3279.46 12.81 4877.92 1086.96 13186.67 00:09:56.910 PCIE (0000:00:13.0) NSID 1 from core 2: 3279.46 12.81 4878.59 1076.97 11879.46 00:09:56.910 PCIE (0000:00:12.0) NSID 1 from core 2: 3282.66 12.82 4872.80 1080.02 11995.33 00:09:56.910 PCIE (0000:00:12.0) NSID 2 from core 2: 3282.66 12.82 4873.72 1087.24 11459.79 00:09:56.910 PCIE (0000:00:12.0) NSID 3 from core 2: 3282.66 12.82 4873.16 1060.26 12510.37 00:09:56.911 ======================================================== 00:09:56.911 Total : 19686.34 76.90 4875.65 1060.26 13186.67 00:09:56.911 00:09:56.911 ************************************ 00:09:56.911 END TEST nvme_multi_secondary 00:09:56.911 ************************************ 00:09:56.911 10:53:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65154 00:09:56.911 10:53:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65155 00:09:56.911 00:09:56.911 real 0m10.705s 00:09:56.911 user 0m18.620s 00:09:56.911 sys 0m0.989s 00:09:56.911 10:53:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.911 10:53:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:56.911 10:53:43 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:56.911 10:53:43 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:56.911 10:53:43 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64092 ]] 00:09:56.911 10:53:43 nvme -- common/autotest_common.sh@1094 -- # kill 64092 00:09:56.911 10:53:43 nvme -- common/autotest_common.sh@1095 -- # wait 64092 00:09:56.911 [2024-11-15 10:53:43.715208] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.715737] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.715831] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.715884] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.722992] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.723274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.723312] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.723344] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.727544] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 
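[Note] The Latency(us) tables ending above are internally consistent, which allows a quick sanity check: MiB/s is IOPS × 4096 B ÷ 2^20, and with "-q 16" Little's law gives average latency ≈ queue depth ÷ IOPS. Taking the first core-1 row from the earlier table (4976.77 IOPS) reproduces both printed columns to within rounding:

```c
#include <stdio.h>

int
main(void)
{
	double iops = 4976.77;   /* PCIE (0000:00:10.0) NSID 1 from core 1 */
	double qd = 16.0;        /* -q 16 */
	double io_size = 4096.0; /* -o 4096 */

	/* ~19.44 MiB/s and ~3215 us, matching the table's row. */
	printf("MiB/s : %.2f\n", iops * io_size / (1024.0 * 1024.0));
	printf("avg us: %.2f\n", qd / iops * 1e6);
	return 0;
}
```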
00:09:56.911 [2024-11-15 10:53:43.727610] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.727639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.727671] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.732026] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.732103] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.732132] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:56.911 [2024-11-15 10:53:43.732163] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:57.170 10:53:43 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:57.170 10:53:43 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:57.170 10:53:43 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:57.170 10:53:43 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.170 10:53:43 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.170 10:53:43 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.170 ************************************ 00:09:57.170 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:57.170 ************************************ 00:09:57.170 10:53:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:57.429 * Looking for test storage... 
00:09:57.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:57.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.429 --rc genhtml_branch_coverage=1 00:09:57.429 --rc genhtml_function_coverage=1 00:09:57.429 --rc genhtml_legend=1 00:09:57.429 --rc geninfo_all_blocks=1 00:09:57.429 --rc geninfo_unexecuted_blocks=1 00:09:57.429 00:09:57.429 ' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:57.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.429 --rc genhtml_branch_coverage=1 00:09:57.429 --rc genhtml_function_coverage=1 00:09:57.429 --rc genhtml_legend=1 00:09:57.429 --rc geninfo_all_blocks=1 00:09:57.429 --rc geninfo_unexecuted_blocks=1 00:09:57.429 00:09:57.429 ' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:57.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.429 --rc genhtml_branch_coverage=1 00:09:57.429 --rc genhtml_function_coverage=1 00:09:57.429 --rc genhtml_legend=1 00:09:57.429 --rc geninfo_all_blocks=1 00:09:57.429 --rc geninfo_unexecuted_blocks=1 00:09:57.429 00:09:57.429 ' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:57.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:57.429 --rc genhtml_branch_coverage=1 00:09:57.429 --rc genhtml_function_coverage=1 00:09:57.429 --rc genhtml_legend=1 00:09:57.429 --rc geninfo_all_blocks=1 00:09:57.429 --rc geninfo_unexecuted_blocks=1 00:09:57.429 00:09:57.429 ' 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:57.429 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:57.430 
10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:57.430 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65321 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65321 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65321 ']' 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
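The waitforlisten step above blocks until the freshly launched spdk_tgt answers on its RPC socket. The real helper lives in autotest_common.sh; a minimal stand-in that captures the idea, assuming the default /var/tmp/spdk.sock path and using the stock rpc_get_methods RPC as the liveness probe:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
spdk_target_pid=$!

for (( i = 0; i < 100; i++ )); do           # max_retries=100, as in the trace
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                               # socket is up and serving RPCs
    fi
    kill -0 "$spdk_target_pid" || exit 1    # bail out if the target died early
    sleep 0.5
done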
00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.726 10:53:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:57.726 [2024-11-15 10:53:44.416564] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:09:57.726 [2024-11-15 10:53:44.417312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65321 ] 00:09:58.009 [2024-11-15 10:53:44.620431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.009 [2024-11-15 10:53:44.740084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.009 [2024-11-15 10:53:44.740316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.009 [2024-11-15 10:53:44.740457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.009 [2024-11-15 10:53:44.740482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 nvme0n1 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_tl17K.txt 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:58.946 true 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731668025 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65344 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:58.946 10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:58.946 
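Stripped of the xtrace noise, the setup this test just completed is a short RPC sequence: arm a one-shot error on admin opcode 10 (GET FEATURES), then fire a GET FEATURES that gets parked by --do_not_submit so the upcoming reset has a stuck admin command to flush. The RPC invocations below are copied from the trace; the glue around them is a sketch, with $tmp_file standing for the mktemp'd err_inj file above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# One-shot failure (sct=0, sc=1) for the next admin GET FEATURES (opc 10);
# --do_not_submit keeps the command queued instead of sending it to the drive.
"$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# GET FEATURES (cdw10=7, number of queues) that will hit the injected error;
# backgrounded, since it cannot complete until the controller reset kicks in.
"$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== \
    > "$tmp_file" &
get_feat_pid=$!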
10:53:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:01.481 [2024-11-15 10:53:47.743432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:01.481 [2024-11-15 10:53:47.743973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:01.481 [2024-11-15 10:53:47.744128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:01.481 [2024-11-15 10:53:47.744239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:01.481 [2024-11-15 10:53:47.746741] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65344 00:10:01.481 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65344 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65344 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_tl17K.txt 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_tl17K.txt 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65321 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65321 ']' 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65321 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65321 00:10:01.481 killing process with pid 65321 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65321' 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65321 00:10:01.481 10:53:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65321 00:10:04.011 10:53:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:04.011 10:53:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:04.011 00:10:04.011 real 0m6.429s 00:10:04.011 user 0m22.284s 00:10:04.011 sys 0m0.808s 00:10:04.011 10:53:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
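The base64_decode_bits calls above unpack the saved 16-byte completion: bytes 14-15 of an NVMe CQE carry the phase bit (bit 0), status code (bits 1-8) and status code type (bits 9-11), which is what the (1, 255) and (9, 3) argument pairs select. The same decode, spelled out (base64/hexdump invocation copied from the trace):

cpl=AAAAAAAAAAAAAAAAAAACAA==                     # jq -r .cpl from the tmp file
bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))

status=$(( bytes[14] | bytes[15] << 8 ))         # 16-bit status half of CQE DW3
sc=$((  (status >> 1) & 0xff ))                  # status code      -> 0x1
sct=$(( (status >> 9) & 0x7  ))                  # status code type -> 0x0
printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"

The result, sc=0x1 sct=0x0, matches the injected --sct 0 --sc 1 exactly, which is why the (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) check above passes.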
xtrace_disable 00:10:04.011 ************************************ 00:10:04.011 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:04.011 ************************************ 00:10:04.011 10:53:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:04.011 10:53:50 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:04.011 10:53:50 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:04.011 10:53:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.011 10:53:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.011 10:53:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.011 ************************************ 00:10:04.011 START TEST nvme_fio 00:10:04.011 ************************************ 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:04.011 10:53:50 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:04.011 10:53:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:04.271 10:53:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:04.271 10:53:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:04.271 10:53:51 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:04.271 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:04.530 10:53:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:04.530 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:04.530 fio-3.35 00:10:04.530 Starting 1 thread 00:10:08.721 00:10:08.721 test: (groupid=0, jobs=1): err= 0: pid=65496: Fri Nov 15 10:53:54 2024 00:10:08.721 read: IOPS=22.8k, BW=89.1MiB/s (93.5MB/s)(178MiB/2001msec) 00:10:08.721 slat (usec): min=3, max=559, avg= 4.47, stdev= 2.97 00:10:08.721 clat (usec): min=276, max=10763, avg=2796.73, stdev=442.83 00:10:08.721 lat (usec): min=281, max=10806, avg=2801.20, stdev=443.45 00:10:08.721 clat percentiles (usec): 00:10:08.721 | 1.00th=[ 2180], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2638], 00:10:08.721 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:10:08.721 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 3228], 00:10:08.721 | 99.00th=[ 4490], 99.50th=[ 5342], 99.90th=[ 8356], 99.95th=[ 8717], 00:10:08.721 | 99.99th=[10552] 00:10:08.721 bw ( KiB/s): min=90088, max=90584, per=98.90%, avg=90280.00, stdev=266.29, samples=3 00:10:08.721 iops : min=22522, max=22646, avg=22570.00, stdev=66.57, samples=3 00:10:08.721 write: IOPS=22.7k, BW=88.6MiB/s (92.9MB/s)(177MiB/2001msec); 0 zone resets 00:10:08.721 slat (usec): min=3, max=293, avg= 4.72, stdev= 2.27 00:10:08.721 clat (usec): min=184, max=10675, avg=2803.62, stdev=444.77 00:10:08.721 lat (usec): min=189, max=10693, avg=2808.34, stdev=445.32 00:10:08.721 clat percentiles (usec): 00:10:08.721 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:10:08.721 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:10:08.721 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2933], 95.00th=[ 3261], 00:10:08.721 | 99.00th=[ 4424], 99.50th=[ 5342], 99.90th=[ 8455], 99.95th=[ 8979], 00:10:08.721 | 99.99th=[10159] 00:10:08.721 bw ( KiB/s): min=89576, max=92248, per=99.77%, avg=90525.33, stdev=1494.47, samples=3 00:10:08.721 iops : min=22394, max=23062, avg=22631.33, stdev=373.62, samples=3 00:10:08.721 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:08.721 lat (msec) : 2=0.68%, 4=97.81%, 10=1.46%, 20=0.02% 00:10:08.721 cpu : usr=98.70%, sys=0.35%, ctx=32, majf=0, minf=607 
00:10:08.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:08.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:08.721 issued rwts: total=45663,45388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:08.721 00:10:08.721 Run status group 0 (all jobs): 00:10:08.721 READ: bw=89.1MiB/s (93.5MB/s), 89.1MiB/s-89.1MiB/s (93.5MB/s-93.5MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:08.721 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=177MiB (186MB), run=2001-2001msec 00:10:08.721 ----------------------------------------------------- 00:10:08.721 Suppressions used: 00:10:08.721 count bytes template 00:10:08.721 1 32 /usr/src/fio/parse.c 00:10:08.721 1 8 libtcmalloc_minimal.so 00:10:08.721 ----------------------------------------------------- 00:10:08.721 00:10:08.721 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:08.721 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:08.721 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:08.721 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:08.721 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:08.721 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:08.979 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:08.979 10:53:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:08.979 10:53:55 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:08.979 10:53:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:09.238 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:09.238 fio-3.35 00:10:09.238 Starting 1 thread 00:10:13.425 00:10:13.425 test: (groupid=0, jobs=1): err= 0: pid=65562: Fri Nov 15 10:53:59 2024 00:10:13.425 read: IOPS=22.7k, BW=88.7MiB/s (93.0MB/s)(178MiB/2001msec) 00:10:13.425 slat (nsec): min=3764, max=49835, avg=4478.45, stdev=1093.97 00:10:13.425 clat (usec): min=288, max=12690, avg=2810.28, stdev=314.55 00:10:13.425 lat (usec): min=294, max=12740, avg=2814.75, stdev=314.93 00:10:13.425 clat percentiles (usec): 00:10:13.425 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2704], 00:10:13.425 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:13.425 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2999], 00:10:13.425 | 99.00th=[ 3589], 99.50th=[ 3949], 99.90th=[ 7767], 99.95th=[10421], 00:10:13.425 | 99.99th=[12518] 00:10:13.425 bw ( KiB/s): min=88208, max=91200, per=99.22%, avg=90141.33, stdev=1676.84, samples=3 00:10:13.425 iops : min=22054, max=22800, avg=22536.00, stdev=418.06, samples=3 00:10:13.425 write: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(176MiB/2001msec); 0 zone resets 00:10:13.425 slat (nsec): min=3896, max=41796, avg=4770.97, stdev=1129.70 00:10:13.425 clat (usec): min=261, max=12602, avg=2816.60, stdev=326.31 00:10:13.425 lat (usec): min=267, max=12621, avg=2821.37, stdev=326.68 00:10:13.425 clat percentiles (usec): 00:10:13.425 | 1.00th=[ 2540], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2704], 00:10:13.425 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:10:13.425 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 00:10:13.425 | 99.00th=[ 3589], 99.50th=[ 3982], 99.90th=[ 8356], 99.95th=[10814], 00:10:13.425 | 99.99th=[12387] 00:10:13.425 bw ( KiB/s): min=87648, max=92576, per=100.00%, avg=90376.00, stdev=2506.07, samples=3 00:10:13.425 iops : min=21912, max=23144, avg=22594.00, stdev=626.52, samples=3 00:10:13.425 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:13.425 lat (msec) : 2=0.04%, 4=99.49%, 10=0.39%, 20=0.06% 00:10:13.425 cpu : usr=99.50%, sys=0.00%, ctx=3, majf=0, minf=607 00:10:13.425 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:13.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.425 issued rwts: total=45449,45172,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.425 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:13.425 00:10:13.425 Run status group 0 (all jobs): 00:10:13.425 READ: bw=88.7MiB/s (93.0MB/s), 88.7MiB/s-88.7MiB/s (93.0MB/s-93.0MB/s), io=178MiB (186MB), run=2001-2001msec 00:10:13.425 WRITE: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=176MiB (185MB), run=2001-2001msec 00:10:13.425 ----------------------------------------------------- 00:10:13.425 Suppressions used: 00:10:13.425 count bytes template 00:10:13.425 1 32 /usr/src/fio/parse.c 00:10:13.425 1 8 libtcmalloc_minimal.so 00:10:13.425 ----------------------------------------------------- 00:10:13.425 00:10:13.425 
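Each per-controller pass in this test repeats the same recipe: identify the namespace, settle on a block size, then run fio through the SPDK external ioengine with ASan preloaded ahead of the plugin. Condensed into one loop (paths, the bdf list and --bs=4096 are taken from this log; the Extended Data LBA branch is simplified to the default these runs actually took):

identify=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # skip controllers that expose no namespaces
    "$identify" -r "trtype:PCIe traddr:$bdf" | grep -qE '^Namespace ID:[0-9]+' || continue
    bs=4096                 # none of these namespaces used extended-data LBAs

    # the plugin supplies ioengine=spdk; the filename encodes transport + bdf
    LD_PRELOAD="/usr/lib64/libasan.so.8 $plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        "--filename=trtype=PCIe traddr=${bdf//:/.}" --bs="$bs"
done

Preloading libasan.so before the plugin is what the ldd/grep/break dance in the trace resolves: the sanitizer runtime must be first in LD_PRELOAD or the ASan-built plugin fails to load under the stock fio binary.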
10:53:59 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:13.425 10:53:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:13.425 10:53:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:13.425 10:53:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:13.425 10:54:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:13.425 10:54:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:13.684 10:54:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:13.684 10:54:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:13.684 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:13.685 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:13.685 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:13.685 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:13.685 10:54:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:13.943 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:13.943 fio-3.35 00:10:13.943 Starting 1 thread 00:10:18.132 00:10:18.132 test: (groupid=0, jobs=1): err= 0: pid=65627: Fri Nov 15 10:54:04 2024 00:10:18.132 read: IOPS=22.1k, BW=86.5MiB/s (90.7MB/s)(173MiB/2001msec) 00:10:18.132 slat (usec): min=3, max=180, avg= 4.57, stdev= 1.43 00:10:18.132 clat (usec): min=211, max=11783, avg=2883.90, stdev=413.13 00:10:18.132 lat (usec): min=216, max=11834, avg=2888.47, stdev=413.68 00:10:18.132 clat percentiles (usec): 00:10:18.132 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:18.132 | 30.00th=[ 2802], 40.00th=[ 
2835], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:18.132 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:10:18.132 | 99.00th=[ 4293], 99.50th=[ 5407], 99.90th=[ 8291], 99.95th=[ 9372], 00:10:18.132 | 99.99th=[11469] 00:10:18.132 bw ( KiB/s): min=86424, max=90064, per=99.04%, avg=87730.67, stdev=2025.57, samples=3 00:10:18.132 iops : min=21606, max=22516, avg=21932.67, stdev=506.39, samples=3 00:10:18.132 write: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(172MiB/2001msec); 0 zone resets 00:10:18.132 slat (nsec): min=3846, max=38233, avg=4776.43, stdev=1129.41 00:10:18.132 clat (usec): min=187, max=11588, avg=2888.99, stdev=425.08 00:10:18.132 lat (usec): min=191, max=11609, avg=2893.77, stdev=425.53 00:10:18.132 clat percentiles (usec): 00:10:18.132 | 1.00th=[ 2147], 5.00th=[ 2671], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:18.132 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2868], 00:10:18.132 | 70.00th=[ 2900], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3064], 00:10:18.132 | 99.00th=[ 4293], 99.50th=[ 5473], 99.90th=[ 8356], 99.95th=[ 9634], 00:10:18.132 | 99.99th=[11338] 00:10:18.132 bw ( KiB/s): min=86312, max=90976, per=99.91%, avg=87909.33, stdev=2656.58, samples=3 00:10:18.132 iops : min=21578, max=22744, avg=21977.33, stdev=664.15, samples=3 00:10:18.132 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02% 00:10:18.132 lat (msec) : 2=0.72%, 4=97.97%, 10=1.21%, 20=0.04% 00:10:18.132 cpu : usr=99.20%, sys=0.25%, ctx=5, majf=0, minf=607 00:10:18.132 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.132 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.132 issued rwts: total=44314,44017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.132 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.132 00:10:18.132 Run status group 0 (all jobs): 00:10:18.132 READ: bw=86.5MiB/s (90.7MB/s), 86.5MiB/s-86.5MiB/s (90.7MB/s-90.7MB/s), io=173MiB (182MB), run=2001-2001msec 00:10:18.132 WRITE: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:18.132 ----------------------------------------------------- 00:10:18.132 Suppressions used: 00:10:18.132 count bytes template 00:10:18.132 1 32 /usr/src/fio/parse.c 00:10:18.132 1 8 libtcmalloc_minimal.so 00:10:18.132 ----------------------------------------------------- 00:10:18.132 00:10:18.132 10:54:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.132 10:54:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:18.132 10:54:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.132 10:54:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:18.132 10:54:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:18.132 10:54:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:18.391 10:54:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:18.391 10:54:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:18.391 10:54:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:18.649 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:18.649 fio-3.35 00:10:18.649 Starting 1 thread 00:10:23.918 00:10:23.918 test: (groupid=0, jobs=1): err= 0: pid=65689: Fri Nov 15 10:54:10 2024 00:10:23.918 read: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(172MiB/2001msec) 00:10:23.918 slat (nsec): min=3743, max=56959, avg=4664.23, stdev=1111.42 00:10:23.918 clat (usec): min=222, max=11221, avg=2902.53, stdev=266.18 00:10:23.918 lat (usec): min=227, max=11278, avg=2907.20, stdev=266.50 00:10:23.918 clat percentiles (usec): 00:10:23.918 | 1.00th=[ 2606], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2769], 00:10:23.918 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:10:23.918 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3130], 00:10:23.918 | 99.00th=[ 3326], 99.50th=[ 3752], 99.90th=[ 6390], 99.95th=[ 8291], 00:10:23.918 | 99.99th=[10945] 00:10:23.918 bw ( KiB/s): min=86720, max=88800, per=99.69%, avg=87722.67, stdev=1042.01, samples=3 00:10:23.919 iops : min=21680, max=22200, avg=21930.67, stdev=260.50, samples=3 00:10:23.919 write: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec); 0 zone resets 00:10:23.919 slat (nsec): min=3917, max=35978, avg=4853.92, stdev=1035.59 00:10:23.919 clat (usec): min=203, max=11049, avg=2908.17, stdev=273.37 00:10:23.919 lat (usec): min=208, max=11066, avg=2913.02, stdev=273.65 00:10:23.919 clat percentiles (usec): 00:10:23.919 | 1.00th=[ 2606], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:10:23.919 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:10:23.919 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3130], 
00:10:23.919 | 99.00th=[ 3359], 99.50th=[ 4113], 99.90th=[ 6587], 99.95th=[ 8848], 00:10:23.919 | 99.99th=[10683] 00:10:23.919 bw ( KiB/s): min=86320, max=88816, per=100.00%, avg=87888.00, stdev=1365.54, samples=3 00:10:23.919 iops : min=21580, max=22204, avg=21972.00, stdev=341.39, samples=3 00:10:23.919 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:23.919 lat (msec) : 2=0.05%, 4=99.42%, 10=0.47%, 20=0.03% 00:10:23.919 cpu : usr=99.35%, sys=0.10%, ctx=4, majf=0, minf=605 00:10:23.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:23.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:23.919 issued rwts: total=44018,43739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:23.919 00:10:23.919 Run status group 0 (all jobs): 00:10:23.919 READ: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:23.919 WRITE: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:10:23.919 ----------------------------------------------------- 00:10:23.919 Suppressions used: 00:10:23.919 count bytes template 00:10:23.919 1 32 /usr/src/fio/parse.c 00:10:23.919 1 8 libtcmalloc_minimal.so 00:10:23.919 ----------------------------------------------------- 00:10:23.919 00:10:23.919 10:54:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:23.919 10:54:10 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:23.919 00:10:23.919 real 0m19.985s 00:10:23.919 user 0m14.903s 00:10:23.919 sys 0m6.284s 00:10:23.919 10:54:10 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.919 ************************************ 00:10:23.919 END TEST nvme_fio 00:10:23.919 10:54:10 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:23.919 ************************************ 00:10:23.919 ************************************ 00:10:23.919 END TEST nvme 00:10:23.919 ************************************ 00:10:23.919 00:10:23.919 real 1m35.078s 00:10:23.919 user 3m43.198s 00:10:23.919 sys 0m25.443s 00:10:23.919 10:54:10 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.919 10:54:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:23.919 10:54:10 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:23.919 10:54:10 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:23.919 10:54:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:23.919 10:54:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.919 10:54:10 -- common/autotest_common.sh@10 -- # set +x 00:10:23.919 ************************************ 00:10:23.919 START TEST nvme_scc 00:10:23.919 ************************************ 00:10:23.919 10:54:10 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:23.919 * Looking for test storage... 
00:10:23.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:23.919 10:54:10 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:23.919 10:54:10 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:23.919 10:54:10 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:23.919 10:54:10 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:23.919 10:54:10 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:24.179 10:54:10 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.179 10:54:10 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:24.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.179 --rc genhtml_branch_coverage=1 00:10:24.179 --rc genhtml_function_coverage=1 00:10:24.179 --rc genhtml_legend=1 00:10:24.179 --rc geninfo_all_blocks=1 00:10:24.179 --rc geninfo_unexecuted_blocks=1 00:10:24.179 00:10:24.179 ' 00:10:24.179 10:54:10 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:24.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.179 --rc genhtml_branch_coverage=1 00:10:24.179 --rc genhtml_function_coverage=1 00:10:24.179 --rc genhtml_legend=1 00:10:24.179 --rc geninfo_all_blocks=1 00:10:24.179 --rc geninfo_unexecuted_blocks=1 00:10:24.179 00:10:24.179 ' 00:10:24.179 10:54:10 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:24.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.179 --rc genhtml_branch_coverage=1 00:10:24.179 --rc genhtml_function_coverage=1 00:10:24.179 --rc genhtml_legend=1 00:10:24.179 --rc geninfo_all_blocks=1 00:10:24.179 --rc geninfo_unexecuted_blocks=1 00:10:24.179 00:10:24.179 ' 00:10:24.179 10:54:10 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:24.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.179 --rc genhtml_branch_coverage=1 00:10:24.179 --rc genhtml_function_coverage=1 00:10:24.179 --rc genhtml_legend=1 00:10:24.179 --rc geninfo_all_blocks=1 00:10:24.179 --rc geninfo_unexecuted_blocks=1 00:10:24.179 00:10:24.179 ' 00:10:24.179 10:54:10 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.179 10:54:10 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.179 10:54:10 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.179 10:54:10 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.179 10:54:10 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.179 10:54:10 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:24.179 10:54:10 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:24.179 10:54:10 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:24.179 10:54:10 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:24.179 10:54:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:24.179 10:54:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:24.179 10:54:10 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:24.179 10:54:10 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:24.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:24.748 Waiting for block devices as requested 00:10:25.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.006 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.265 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.265 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:30.570 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:30.570 10:54:17 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:30.570 10:54:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:30.570 10:54:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:30.570 10:54:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:30.570 10:54:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
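The long register-by-register trace that starts here is nvme_get from test/common/nvme/functions.sh caching every `field : value` line of nvme-cli's id-ctrl output into a bash associative array. A compressed stand-in for that loop (the real helper also handles shift/eval indirection, per-namespace data and multi-word values such as the padded model string):

declare -A nvme0

while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # 'vid     ' -> 'vid'
    [[ -n $reg && -n $val ]] || continue     # skip headers and blank lines
    nvme0[$reg]=${val# }                     # e.g. nvme0[vid]=0x1b36
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

echo "${nvme0[vid]} ${nvme0[mdts]}"          # 0x1b36 7, matching the trace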
00:10:30.570 10:54:17 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl 'reg : val' pairs stored in nvme0[] (condensed from the per-field IFS=: / read -r reg val / eval trace):
00:10:30.570 10:54:17 nvme_scc --     ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0
00:10:30.570 10:54:17 nvme_scc --     ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:10:30.571 10:54:17 nvme_scc --     crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7
00:10:30.571 10:54:17 nvme_scc --     elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:10:30.571 10:54:17 nvme_scc --     tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:10:30.572 10:54:17 nvme_scc --     sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:10:30.572 10:54:17 nvme_scc --     pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0
00:10:30.572 10:54:17 nvme_scc --     vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:30.572 10:54:17 nvme_scc --     subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:30.573 10:54:17 nvme_scc --     ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
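Everything in this stretch of the trace is one helper at work: nvme_get runs nvme-cli against the device, splits each "field : value" line of output on the colon, and evals the pair into a global associative array named after the device node. A minimal sketch of that pattern, assuming the loop shape shown in the trace (the real nvme/functions.sh may quote and trim slightly differently):

    # Sketch of the functions.sh@16-23 pattern traced above: capture
    # "reg : val" lines from nvme-cli into a global associative array.
    nvme_get() {
        local ref=$1 reg val
        shift
        declare -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # field name, padding stripped (turns "lbaf 0" into "lbaf0")
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\${val# }"  # e.g. nvme0[mdts]=7
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Invoked as in the trace: nvme_get nvme0 id-ctrl /dev/nvme0

Because IFS is set to ":" alone, only the first colon splits the line; values that themselves contain colons, such as subnqn=nqn.2019-08.org.qemu:12341 or the ps0 power-state string, land intact in val.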
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:30.573 10:54:17 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns 'reg : val' pairs stored in nvme0n1[] (condensed, same per-field trace pattern):
00:10:30.573 10:54:17 nvme_scc --     nsze=0x140000 ncap=0x140000 nuse=0x140000
00:10:30.573 10:54:17 nvme_scc --     nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:30.573 10:54:17 nvme_scc --     nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:10:30.574 10:54:17 nvme_scc --     mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:10:30.574 10:54:17 nvme_scc --     nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:30.574 10:54:17 nvme_scc --     lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:10:30.574 10:54:17 nvme_scc --     lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
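Decoded, the namespace geometry falls out of two of these fields: flbas=0x4 selects LBA format 4, whose lbads:12 means 2^12 = 4096-byte logical blocks, so nsze=0x140000 (1,310,720 blocks) is a 5 GiB namespace. A sketch of that arithmetic against the array captured above (the decode itself is illustrative, not part of functions.sh):

    # Illustrative decode of the captured id-ns values:
    fmt=$((nvme0n1[flbas] & 0xf))    # low nibble of FLBAS = in-use LBA format index
    lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<< "${nvme0n1[lbaf$fmt]}")
    bs=$((1 << lbads))               # lbads:12 -> 4096-byte blocks
    echo "nvme0n1: $((nvme0n1[nsze])) blocks x $bs B = $(((nvme0n1[nsze] * bs) >> 30)) GiB"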
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
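With nvme0 fully cataloged, the scan records it in the global bookkeeping arrays: ctrls maps the device name to its id-ctrl array, nvmes to its namespace map (nvme0_ns, keyed by namespace number), and bdfs to the PCI address. A downstream consumer can then walk everything the scan found; the loop below is an illustrative sketch, not code from the script:

    # Walk the arrays filled in above (ctrls/nvmes/bdfs come from the trace):
    for ctrl in "${!ctrls[@]}"; do
        printf '%s @ %s\n' "$ctrl" "${bdfs[$ctrl]}"   # e.g. nvme0 @ 0000:00:11.0
        unset -n ns_map
        declare -n ns_map=${nvmes[$ctrl]}             # e.g. nvme0_ns
        for nsid in "${!ns_map[@]}"; do
            printf '  ns%s -> /dev/%s\n' "$nsid" "${ns_map[$nsid]}"
        done
    done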
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:30.574 10:54:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:30.574 10:54:17 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:30.574 10:54:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:30.575 10:54:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:30.575 10:54:17 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:30.575 10:54:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:10:30.575 10:54:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:10:30.575 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:10:30.575 10:54:17 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl 'reg : val' pairs stored in nvme1[] (condensed, same per-field trace pattern as nvme0):
00:10:30.575 10:54:17 nvme_scc --     vid=0x1b36 ssvid=0x1af4 sn='12340 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7
00:10:30.575 10:54:17 nvme_scc --     cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:10:30.575 10:54:17 nvme_scc --     crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7
00:10:30.576 10:54:17 nvme_scc --     elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0
00:10:30.576 10:54:17 nvme_scc --     tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:10:30.576 10:54:17 nvme_scc --     sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
nvme/functions.sh@21 -- # IFS=: 00:10:30.576 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.576 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.576 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:30.576 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:30.577 10:54:17 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.577 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
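The four-record pattern repeated above (test [[ -n $val ]], eval the assignment, reset IFS=:, read the next reg/val pair) is nvme/functions.sh's nvme_get loop: it splits each line of nvme-cli's plain-text output on the first colon and stores the pair in a global associative array named by the first argument. A minimal standalone sketch of that pattern follows; nvme_get_sketch and the whitespace trimming are illustrative assumptions, not the literal SPDK helper:

  nvme_get_sketch() { # usage: nvme_get_sketch nvme1 id-ctrl /dev/nvme1
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # global array, e.g. nvme1=()
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue          # keep only "reg : val" lines
      reg=${reg//[[:space:]]/}           # "frmw      " -> "frmw"
      val="${val# }"                     # drop the space after the colon
      eval "${ref}[${reg}]=\"\${val}\""  # e.g. nvme1[frmw]="0x3"
    done < <(nvme "$@")                  # e.g. nvme id-ctrl /dev/nvme1
  }

After the call, values read back as plain array lookups; in the trace above, "${nvme1[subnqn]}" would give nqn.2019-08.org.qemu:12340.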
00:10:30.577-578 10:54:17 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1 id-ctrl, remaining fields:
  ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=nvme1n1 reg val; shift; local -gA 'nvme1n1=()'
00:10:30.578 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:30.578-579 10:54:17 nvme_scc -- nvme/functions.sh@21-23 -- # nvme1n1 id-ns parse loop; values captured:
  nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
  nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
  nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
  anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
  lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:30.579 10:54:17 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:30.579 10:54:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:30.579 10:54:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:30.579 10:54:17 nvme_scc -- scripts/common.sh@27 -- # return 0
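The @58-@63 records just above are the discovery bookkeeping: each controller gets entries in the ctrls/nvmes/bdfs maps plus a slot in ordered_ctrls, and nvmes stores the name of the per-controller namespace array (nvme1_ns), which callers resolve with a bash nameref the same way @53 does. A small self-contained sketch of that indirection, with map contents mirroring the trace; get_nss_for is an illustrative name, not a functions.sh helper:

  declare -A ctrls=([nvme1]=nvme1)
  declare -A nvmes=([nvme1]=nvme1_ns)   # value is an array *name*
  declare -A bdfs=([nvme1]=0000:00:10.0)
  declare -A nvme1_ns=([1]=nvme1n1)     # index 1 = ${ns##*n} from the trace

  get_nss_for() {                       # list namespaces of a controller
    local -n _ctrl_ns=${nvmes[$1]}      # nameref, as in functions.sh@53
    printf '%s\n' "${_ctrl_ns[@]}"
  }
  get_nss_for nvme1                     # prints: nvme1n1

Keeping only array names in the shared maps sidesteps bash's lack of nested arrays while still letting one lookup fan out to a controller's full namespace list.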
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@17-20 -- # local ref=nvme2 reg val; shift; local -gA 'nvme2=()'
00:10:30.579 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:30.579-580 10:54:17 nvme_scc -- nvme/functions.sh@21-23 -- # nvme2 id-ctrl parse loop; values captured:
  vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400
  cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1
  fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
  oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
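Back on nvme1's namespace for a moment: the id-ns values captured earlier decode to the active geometry. The low nibble of flbas=0x7 selects LBA format 7, whose descriptor (lbaf7, the one marked "(in use)") reads ms:64 lbads:12, i.e. 4096-byte data blocks with 64 bytes of metadata each, and nsze/ncap/nuse=0x17a17a count blocks, not bytes. Hand-decoding that, assuming the standard NVMe field layout:

  fmt=$(( 0x7 & 0xf ))           # FLBAS low nibble -> format index 7
  lbads=12                       # from "lbads:12" in lbaf7
  block=$(( 1 << lbads ))        # -> 4096-byte logical blocks
  bytes=$(( 0x17a17a * block ))  # nsze is in blocks
  echo "$fmt $block $bytes"      # -> 7 4096 6343335936 (~5.9 GiB)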
00:10:30.580 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:30.580 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:30.580 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:30.581 10:54:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
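One register recorded just below, oncs=0x15d (Optional NVM Command Support), is what a copy-offload test like nvme_scc ultimately cares about: in the NVMe spec, ONCS bit 8 advertises the Copy command. A hedged sketch of such a gate over the parsed array; the function name is illustrative, not necessarily what functions.sh defines:

supports_scc() {
    local -n ctrl=$1                 # bash 4.3+ nameref to a parsed array, e.g. nvme2
    # ONCS is a bitmask; bit 8 (0x100) = Copy command. 0x15d has bit 8 set.
    (( ctrl[oncs] & 0x100 ))
}
# supports_scc nvme2 && echo "nvme2 supports Simple Copy"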
00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.581 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.582 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:30.846 
10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.846 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
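The namespace walk that began at the end of the controller dump (functions.sh@53-57 above: local -n _ctrl_ns=nvme2_ns, the for-ns glob, ns_dev=nvme2n1, then nvme_get nvme2n1 id-ns /dev/nvme2n1) re-runs the same parser once per namespace. A condensed sketch of that loop, reusing the nvme_get_sketch helper assumed earlier:

ctrl=/sys/class/nvme/nvme2
declare -gA nvme2_ns=()                       # stands in for the _ctrl_ns nameref
for ns in "$ctrl/${ctrl##*/}n"*; do           # functions.sh@54: nvme2n1 nvme2n2 nvme2n3
    [[ -e $ns ]] || continue                  # functions.sh@55
    ns_dev=${ns##*/}                          # functions.sh@56: e.g. nvme2n1
    nvme_get_sketch "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
    nvme2_ns[${ns_dev##*n}]=$ns_dev           # functions.sh@58: keys 1, 2, 3
done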
00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.847 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.848 10:54:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:30.848 10:54:17 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:30.848 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:30.849 10:54:17 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
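The lbaf0..lbaf7 strings recorded next (as they were for nvme2n1 above) describe each LBA format, and flbas=0x4, parsed earlier for this namespace, selects the in-use one; lbads is log2 of the data block size, so lbaf4's "ms:0 lbads:12 rp:0 (in use)" means 4096-byte blocks with no metadata. A small illustrative decoder over the parsed array (the helper name is mine, not from functions.sh):

ns_block_size() {
    local -n ns=$1
    local fmt=$(( ns[flbas] & 0xf ))      # FLBAS bits 3:0 = in-use format index -> 4
    local lbads=${ns[lbaf$fmt]#*lbads:}   # "12 rp:0 (in use)"
    lbads=${lbads%% *}                    # "12"
    echo $(( 1 << lbads ))                # 2^12 = 4096
}
# ns_block_size nvme2n2   # prints 4096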
00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.849 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 
10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 
10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:30.850 10:54:17 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.850 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:30.851 
10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:30.851 10:54:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:30.851 10:54:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:30.851 10:54:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:30.852 10:54:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:30.852 10:54:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:30.852 10:54:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
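[annotation] The trace above is nvme_get from test/common/nvme/functions.sh walking the text output of `nvme id-ctrl` (and, for the nvme2n3 block before it, `nvme id-ns`): each "field : value" line is split once on the colon with IFS=:, and the pair is eval'd into an associative array named after the device, which is why every field appears as an IFS=: / read -r reg val / eval triplet. A minimal stand-alone sketch of that loop (a hypothetical simplified helper; the real nvme_get also handles the shifted reference name and nested namespace arrays):

    #!/usr/bin/env bash
    # Parse `nvme id-ctrl` text output into an associative array, one entry
    # per "field : value" line, mirroring the nvme_get loop traced above.
    declare -A regs=()
    while IFS=: read -r reg val; do
        reg=${reg// /}              # field names are space-padded in the output
        [[ -n $reg ]] || continue   # skip blank and continuation lines
        regs[$reg]=${val# }         # drop the single pad space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "${regs[sn]}"              # -> '12343 ' (trailing pad preserved)

With two read targets, colons inside the value survive intact, which is how the lbaf descriptors above end up stored whole, e.g. nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'. In those descriptors, lbads is the block size as a power of two (lbads:9 = 512-byte blocks, lbads:12 = 4 KiB), so the "(in use)" format on these QEMU namespaces is the 4 KiB one.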
00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.852 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 
10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
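[annotation] The assignments show up as eval 'nvme3[pels]="0"' rather than plain assignments because nvme_get only knows the target array's name as a string: `local -gA` declares a global associative array under a caller-supplied name, and eval expands that name at assignment time. A condensed sketch of the trick, using a hypothetical set_reg helper:

    # Assign into an associative array whose *name* arrives as a parameter,
    # the same local -gA / eval combination visible in the trace.
    set_reg() {
        local ref=$1 reg=$2 val=$3
        eval "${ref}[\$reg]=\$val"   # -> nvme3[pels]=0; $reg/$val expand late
    }
    declare -gA nvme3=()             # functions.sh does this via local -gA
    set_reg nvme3 pels 0
    echo "${nvme3[pels]}"            # -> 0

Escaping $reg and $val inside the eval string keeps them from being interpreted before the assignment context, so padded values like 'QEMU NVMe Ctrl ' land unmangled. The read side of the same indirection uses `local -n` namerefs, as the ctrl_has_scc calls further down show.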
00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.853 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
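[annotation] Several of the captured fields are packed bit-fields rather than plain counts: sqes=0x66 and cqes=0x44 from the block above encode required and maximum queue-entry sizes as powers of two in the low and high nibbles. A quick decode, assuming the standard Identify Controller layout for these fields:

    # Decode SQES/CQES nibbles: low nibble = required entry size (2^n bytes),
    # high nibble = maximum entry size.
    sqes=0x66 cqes=0x44
    printf 'SQ entry: %d B required, %d B max\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQ entry: %d B required, %d B max\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))
    # -> 64 B / 64 B submission entries, 16 B / 16 B completion entries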
00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:30.854 10:54:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:30.854 10:54:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:30.854 10:54:17 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:30.855 
10:54:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:30.855 10:54:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:30.855 10:54:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:30.855 10:54:17 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:31.792 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:32.359 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.359 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.359 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:32.359 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic 00:10:32.619 10:54:19 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:32.619 10:54:19 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:32.619 10:54:19 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.619 10:54:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:32.619 ************************************ 00:10:32.619 START TEST nvme_simple_copy 00:10:32.619 ************************************ 00:10:32.619 10:54:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:32.878 Initializing NVMe Controllers 00:10:32.878 Attaching to 0000:00:10.0 00:10:32.878 Controller supports SCC. Attached to 0000:00:10.0 00:10:32.878 Namespace ID: 1 size: 6GB 00:10:32.878 Initialization complete. 00:10:32.878 00:10:32.878 Controller QEMU NVMe Ctrl (12340 ) 00:10:32.878 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:32.878 Namespace Block Size:4096 00:10:32.878 Writing LBAs 0 to 63 with Random Data 00:10:32.878 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:32.878 LBAs matching Written Data: 64 00:10:32.878 00:10:32.878 real 0m0.314s 00:10:32.878 user 0m0.111s 00:10:32.878 sys 0m0.102s 00:10:32.878 10:54:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.878 ************************************ 00:10:32.878 END TEST nvme_simple_copy 00:10:32.878 ************************************ 00:10:32.878 10:54:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:32.878 ************************************ 00:10:32.878 END TEST nvme_scc 00:10:32.878 ************************************ 00:10:32.878 00:10:32.878 real 0m9.074s 00:10:32.878 user 0m1.530s 00:10:32.878 sys 0m2.454s 00:10:32.878 10:54:19 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.878 10:54:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:32.878 10:54:19 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:32.878 10:54:19 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:32.878 10:54:19 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:32.878 10:54:19 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:32.878 10:54:19 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:32.878 10:54:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:32.878 10:54:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.878 10:54:19 -- common/autotest_common.sh@10 -- # set +x 00:10:32.878 ************************************ 00:10:32.878 START TEST nvme_fdp 00:10:32.878 ************************************ 00:10:32.878 10:54:19 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:33.138 * Looking for test storage... 
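[annotation] With all four controllers scanned, the get_ctrls_with_feature trace above reduces to a nameref lookup plus one bit test: `local -n _ctrl=nvme1` aliases the associative array populated earlier, and `(( oncs & 1 << 8 ))` checks ONCS bit 8, the Copy command (SCC) capability. Every controller here reports oncs=0x15d, so all four qualify and nvme1 is simply the first one echoed; that is the controller simple_copy then attached to at 0000:00:10.0. Condensed from the traced logic:

    # Condensed ctrl_has_scc: nameref into the per-controller array, then
    # test ONCS bit 8 (Copy command support).
    ctrl_has_scc() {
        local -n _ctrl=$1              # e.g. aliases the nvme1 array
        local oncs=${_ctrl[oncs]:-0}   # 0x15d on these QEMU controllers
        (( oncs & 1 << 8 ))
    }
    declare -A nvme1=([oncs]=0x15d)
    ctrl_has_scc nvme1 && echo "nvme1 supports simple copy"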
00:10:33.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.138 --rc genhtml_branch_coverage=1 00:10:33.138 --rc genhtml_function_coverage=1 00:10:33.138 --rc genhtml_legend=1 00:10:33.138 --rc geninfo_all_blocks=1 00:10:33.138 --rc geninfo_unexecuted_blocks=1 00:10:33.138 00:10:33.138 ' 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.138 --rc genhtml_branch_coverage=1 00:10:33.138 --rc genhtml_function_coverage=1 00:10:33.138 --rc genhtml_legend=1 00:10:33.138 --rc geninfo_all_blocks=1 00:10:33.138 --rc geninfo_unexecuted_blocks=1 00:10:33.138 00:10:33.138 ' 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:33.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.138 --rc genhtml_branch_coverage=1 00:10:33.138 --rc genhtml_function_coverage=1 00:10:33.138 --rc genhtml_legend=1 00:10:33.138 --rc geninfo_all_blocks=1 00:10:33.138 --rc geninfo_unexecuted_blocks=1 00:10:33.138 00:10:33.138 ' 00:10:33.138 10:54:19 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.138 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.138 --rc genhtml_branch_coverage=1 00:10:33.138 --rc genhtml_function_coverage=1 00:10:33.138 --rc genhtml_legend=1 00:10:33.138 --rc geninfo_all_blocks=1 00:10:33.138 --rc geninfo_unexecuted_blocks=1 00:10:33.138 00:10:33.138 ' 00:10:33.138 10:54:19 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.138 10:54:19 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.138 10:54:19 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.138 10:54:19 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.138 10:54:19 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.138 10:54:19 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:33.138 10:54:19 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
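[annotation] The lcov probe just above (`lt 1.15 2`) runs cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':' (IFS=.-:) and compared component by component, numerically, which is what the decimal 1 / decimal 2 calls in the trace are doing; 1.15 < 2, so the pre-2.x --rc option spellings are selected. A minimal re-sketch under those assumptions (a hypothetical version_lt, not the exact SPDK function, which also validates non-numeric components):

    # Return success when $1 < $2 under component-wise numeric comparison,
    # splitting on the same separators as the traced cmp_versions.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < len; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "old lcov: use --rc lcov_*_coverage option names"

Numeric comparison is the point here: a naive string compare would rank "15" below "2" and pick the wrong option set.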
00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:33.138 10:54:19 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:33.138 10:54:19 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:33.138 10:54:19 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:33.747 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:34.006 Waiting for block devices as requested 00:10:34.006 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.264 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.522 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:39.804 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:39.804 10:54:26 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:39.804 10:54:26 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:39.804 10:54:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.804 10:54:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:39.805 10:54:26 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.805 10:54:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:39.805 10:54:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.805 10:54:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
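At this point setup.sh reset has rebound the QEMU controllers (0000:00:10.0/11.0/12.0/13.0) from uio_pci_generic to the kernel nvme driver, and scan_nvme_ctrls begins walking /sys/class/nvme/nvme*. The nvme_get calls that dominate the rest of the trace all follow one pattern: run nvme-cli's id-ctrl (or, per namespace, id-ns) in human-readable form and fold its "field : value" lines into a global associative array — the lone [[ -n '' ]] test above is this loop skipping the output's header line, which carries no value. A condensed sketch of that loop, with nvme0_sketch as an illustrative array name standing in for the script's declared nvme0:

  # Fold nvme-cli's "field : value" lines into an associative array.
  declare -A nvme0_sketch
  while IFS=: read -r reg val; do
      [[ -n $val ]] || continue           # skip header/blank lines with no value
      reg=${reg//[[:space:]]/}            # field name, e.g. vid, sn, mdts, oacs
      nvme0_sketch[$reg]=${val# }         # value, minus the separator's space
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "vid=${nvme0_sketch[vid]} sn=${nvme0_sketch[sn]}"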
00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:39.805 10:54:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:39.805 10:54:26 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.805 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
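By this point the read loop has captured nvme0's admin-side id-ctrl fields (oacs=0x12a, acl=3, aerl=3, frmw=0x3, lpa=0x7, ...). Since the stored values are plain strings, later checks can bit-test them with shell arithmetic; a hypothetical example over the values traced above, reusing the nvme0_sketch array from the earlier sketch:

  # OACS bit 3 is Namespace Management in the NVMe spec; 0x12a has it set.
  if (( ${nvme0_sketch[oacs]} & 0x08 )); then
      echo "namespace management supported"
  fi
  # MDTS is a power-of-two multiplier of the controller's minimum page size,
  # so mdts=7 allows transfers of up to 2^7 minimum-size pages.
  echo "mdts=${nvme0_sketch[mdts]}"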
00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:39.806 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:39.806 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:39.806 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:39.807 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.807 
10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:39.807 10:54:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:39.808 10:54:26 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.808 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:39.809 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:39.809 10:54:26 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.809 10:54:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:39.809 10:54:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.809 10:54:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:39.809 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 
10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.810 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 
10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:39.811 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:39.811 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 
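
Each assignment shows up twice, first as `eval 'nvme1[vwc]="0x7"'` and then as the plain assignment it produces, because xtrace prints both the eval and the command it expands to. The single-quote/double-quote layering is what lets values keep significant whitespace, such as the trailing spaces in sn and mn earlier in the trace. A two-line demonstration of the same shape:

declare -A ctrl=()
eval 'ctrl[mn]="QEMU NVMe Ctrl "'   # literal form taken from the trace
printf '[%s]\n' "${ctrl[mn]}"       # [QEMU NVMe Ctrl ] - trailing space kept
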
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.812 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:39.813 10:54:26 nvme_fdp -- 
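
The namespace enumeration a few entries back (`for ns in "$ctrl/${ctrl##*/}n"*`) and the index bookkeeping that follows (`_ctrl_ns[${ns##*n}]`, `ordered_ctrls[${ctrl_dev/nvme/}]`) lean entirely on bash parameter expansion rather than external tools. The idioms, isolated:

ctrl=/sys/class/nvme/nvme1
echo "${ctrl##*/}"    # nvme1 - basename via longest-prefix strip up to last '/'
ns=nvme1n1
echo "${ns##*n}"      # 1     - namespace index, the text after the last 'n'
dev=nvme1
echo "${dev/nvme/}"   # 1     - controller index used to order the controllers
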
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.813 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:39.814 10:54:26 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.814 10:54:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:39.814 10:54:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.814 10:54:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:39.814 
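
At this point nvme1 is fully registered: ctrls maps the controller to itself, nvmes records the name of its per-namespace array (nvme1_ns), bdfs remembers the backing PCI address, and ordered_ctrls indexes it by controller number; the scan then moves on to nvme2 behind the same pci_can_use gate. The empty left-hand sides in `[[ =~ 0000:00:12.0 ]]` and `[[ -z '' ]]` suggest the allow/block lists are unset in this run, so every device passes. A sketch of that bookkeeping (treating PCI_ALLOWED as a space-separated allow-list is an assumption; the real filter lives in scripts/common.sh):

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

pci_can_use() {                      # unset allow-list: every device may be used
  [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $1 "* ]]
}

register_ctrl() {                    # e.g. register_ctrl nvme1 0000:00:10.0
  local dev=$1 bdf=$2
  pci_can_use "$bdf" || return 0
  ctrls["$dev"]=$dev                 # controller handle
  nvmes["$dev"]=${dev}_ns            # name of the controller's namespace map
  bdfs["$dev"]=$bdf                  # PCI address backing the controller
  ordered_ctrls[${dev/nvme/}]=$dev   # nvme1 -> slot 1
}
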
10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:39.814 10:54:26 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.814 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:39.815 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.816 10:54:26 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
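The xtrace above is nvme/functions.sh's nvme_get helper at work: every "reg : val" line emitted by nvme-cli is split on the colon (IFS=:), empty values are skipped by the [[ -n ... ]] guard at @22, and each pair is stored into a global associative array named after the device (here nvme2) via the eval at @23. A minimal sketch of that pattern, reconstructed from the trace for illustration only — the real helper differs in detail (e.g. key normalization and whitespace handling):

    # Sketch only -- shape inferred from the @16-@23 trace lines above.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                         # trace @20: local -gA 'nvme2=()'
        while IFS=: read -r reg val; do             # trace @21
            [[ -n $val ]] || continue               # trace @22: skip blank/header lines
            reg=${reg//[[:space:]]/}                # 'oacs ' -> 'oacs'
            val=${val#"${val%%[![:space:]]*}"}      # strip leading whitespace
            eval "${ref}[${reg}]=\$val"             # trace @23: e.g. nvme2[oacs]=0x12a
        done < <(/usr/local/src/nvme-cli/nvme "$@") # trace @16
    }
    nvme_get nvme2n1 id-ns /dev/nvme2n1             # call shape as logged at @57 below
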
00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.816 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:39.817 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
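Interleaved with these field dumps, the @53-@58 lines show how each controller's namespaces are walked: a nameref (_ctrl_ns) aliases the per-controller namespace map, a sysfs glob enumerates nvme2n1, nvme2n2, and nvme2n3, and every node that exists is parsed with nvme_get and then indexed by its namespace number (the _ctrl_ns assignments that follow each dump). A sketch of that loop, with its shape inferred from the trace:

    # Sketch only -- mirrors the @53-@58 steps in the trace.
    ctrl=/sys/class/nvme/nvme2
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns                 # trace @53: local -n _ctrl_ns=nvme2_ns
    for ns in "$ctrl/${ctrl##*/}n"*; do          # trace @54: /sys/class/nvme/nvme2/nvme2n1 ...
        [[ -e $ns ]] || continue                 # trace @55
        ns_dev=${ns##*/}                         # trace @56: ns_dev=nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # trace @57
        _ctrl_ns[${ns##*n}]=$ns_dev              # trace @58: keyed by namespace id
    done
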
00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:39.817 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.818 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:39.819 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:39.819 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:39.820 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
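The nvme_get trace above repeats the same steps for every field: split one line of nvme id-ns output on ':', then eval the reg/value pair into a bash associative array. A minimal standalone sketch of that pattern (ns_info is an illustrative name, not the real functions.sh helper):

# Parse "reg : value" lines emitted by nvme-cli into an associative array,
# mirroring the IFS=: / read -r reg val loop traced above.
declare -A ns_info
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}        # strip padding from the key
    [[ -n $reg ]] || continue       # skip blank lines
    ns_info[$reg]=$val              # e.g. ns_info[nsze]=" 0x100000"
done < <(nvme id-ns /dev/nvme2n3)
echo "nsze=${ns_info[nsze]} flbas=${ns_info[flbas]}"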
00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:39.821 
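A worked decode of the namespace geometry captured in this trace, assuming the values shown (flbas=0x4 selecting lbaf4 with lbads:12, nsze=0x100000):

# FLBAS bits 3:0 pick the active LBA format; lbaf4 above is "ms:0 lbads:12",
# i.e. 2^12 = 4096-byte blocks with no metadata.
flbas=0x4 nsze=0x100000 lbads=12
fmt=$(( flbas & 0xf ))              # -> 4
block=$(( 1 << lbads ))             # -> 4096 bytes per LBA
total=$(( nsze * block ))           # -> 4294967296 bytes (4 GiB)
echo "lbaf$fmt: ${block}B blocks, $total bytes"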
10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:39.821 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.821 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:39.822 10:54:26 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:39.822 10:54:26 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:39.822 10:54:26 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:39.822 10:54:26 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:39.822 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 
10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:39.822 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 
10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:39.823 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
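Two of the id-ctrl values captured above are log2-encoded sizes; a quick decode under that reading (sqes=0x66 and cqes=0x44 from this controller):

# SQES/CQES pack the required (bits 3:0) and maximum (bits 7:4) queue
# entry sizes as powers of two.
sqes=0x66 cqes=0x44
sq=$(( 1 << (sqes & 0xf) ))             # 2^6 = 64-byte SQ entries
sq_max=$(( 1 << ((sqes >> 4) & 0xf) ))  # also 64
cq=$(( 1 << (cqes & 0xf) ))             # 2^4 = 16-byte CQ entries
echo "SQ entry ${sq}..${sq_max}B, CQ entry ${cq}B"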
00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:39.824 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.825 10:54:26 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
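The ctrl_has_fdp probes above all reduce to one bitwise test: CTRATT bit 19 flags Flexible Data Placement support. A standalone rendering of that check with the two CTRATT values seen in this log:

for ctratt in 0x8000 0x88010; do
    if (( ctratt & 1 << 19 )); then             # bit 19 = 0x80000
        echo "ctratt=$ctratt: FDP supported"    # matches nvme3 (0x88010)
    else
        echo "ctratt=$ctratt: no FDP"           # nvme0/nvme1/nvme2 (0x8000)
    fi
done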
00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:39.825 10:54:26 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:39.825 10:54:26 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:39.825 10:54:26 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:40.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:41.331 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.331 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.331 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.590 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:41.590 10:54:28 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:41.590 10:54:28 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.590 10:54:28 
nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.590 10:54:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:41.590 ************************************ 00:10:41.590 START TEST nvme_flexible_data_placement 00:10:41.590 ************************************ 00:10:41.590 10:54:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:41.850 Initializing NVMe Controllers 00:10:41.850 Attaching to 0000:00:13.0 00:10:41.850 Controller supports FDP Attached to 0000:00:13.0 00:10:41.850 Namespace ID: 1 Endurance Group ID: 1 00:10:41.850 Initialization complete. 00:10:41.850 00:10:41.850 ================================== 00:10:41.850 == FDP tests for Namespace: #01 == 00:10:41.850 ================================== 00:10:41.850 00:10:41.850 Get Feature: FDP: 00:10:41.850 ================= 00:10:41.850 Enabled: Yes 00:10:41.850 FDP configuration Index: 0 00:10:41.850 00:10:41.850 FDP configurations log page 00:10:41.850 =========================== 00:10:41.850 Number of FDP configurations: 1 00:10:41.850 Version: 0 00:10:41.850 Size: 112 00:10:41.850 FDP Configuration Descriptor: 0 00:10:41.850 Descriptor Size: 96 00:10:41.850 Reclaim Group Identifier format: 2 00:10:41.850 FDP Volatile Write Cache: Not Present 00:10:41.850 FDP Configuration: Valid 00:10:41.850 Vendor Specific Size: 0 00:10:41.850 Number of Reclaim Groups: 2 00:10:41.850 Number of Reclaim Unit Handles: 8 00:10:41.850 Max Placement Identifiers: 128 00:10:41.850 Number of Namespaces Supported: 256 00:10:41.850 Reclaim Unit Nominal Size: 6000000 bytes 00:10:41.850 Estimated Reclaim Unit Time Limit: Not Reported 00:10:41.850 RUH Desc #000: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #001: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #002: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #003: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #004: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #005: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #006: RUH Type: Initially Isolated 00:10:41.850 RUH Desc #007: RUH Type: Initially Isolated 00:10:41.850 00:10:41.850 FDP reclaim unit handle usage log page 00:10:41.850 ====================================== 00:10:41.850 Number of Reclaim Unit Handles: 8 00:10:41.850 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:41.850 RUH Usage Desc #001: RUH Attributes: Unused 00:10:41.850 RUH Usage Desc #002: RUH Attributes: Unused 00:10:41.850 RUH Usage Desc #003: RUH Attributes: Unused 00:10:41.850 RUH Usage Desc #004: RUH Attributes: Unused 00:10:41.850 RUH Usage Desc #005: RUH Attributes: Unused 00:10:41.850 RUH Usage Desc #006: RUH Attributes: Unused 00:10:41.850 RUH Usage Desc #007: RUH Attributes: Unused 00:10:41.850 00:10:41.850 FDP statistics log page 00:10:41.850 ======================= 00:10:41.850 Host bytes with metadata written: 1014390784 00:10:41.850 Media bytes with metadata written: 1014571008 00:10:41.850 Media bytes erased: 0 00:10:41.850 00:10:41.850 FDP Reclaim unit handle status 00:10:41.850 ============================== 00:10:41.850 Number of RUHS descriptors: 2 00:10:41.850 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000589a 00:10:41.850 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:41.850 00:10:41.850 FDP write on placement id: 0 success 00:10:41.850 00:10:41.850 Set Feature: Enabling FDP events on Placement handle: 
#0 Success 00:10:41.850 00:10:41.850 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:41.850 00:10:41.850 Get Feature: FDP Events for Placement handle: #0 00:10:41.850 ======================== 00:10:41.850 Number of FDP Events: 6 00:10:41.850 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:41.850 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:41.850 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:10:41.850 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:41.850 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:41.850 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:41.850 00:10:41.850 FDP events log page 00:10:41.850 =================== 00:10:41.850 Number of FDP events: 1 00:10:41.850 FDP Event #0: 00:10:41.850 Event Type: RU Not Written to Capacity 00:10:41.850 Placement Identifier: Valid 00:10:41.850 NSID: Valid 00:10:41.850 Location: Valid 00:10:41.850 Placement Identifier: 0 00:10:41.850 Event Timestamp: 7 00:10:41.850 Namespace Identifier: 1 00:10:41.850 Reclaim Group Identifier: 0 00:10:41.850 Reclaim Unit Handle Identifier: 0 00:10:41.850 00:10:41.850 FDP test passed 00:10:41.850 00:10:41.850 real 0m0.301s 00:10:41.850 user 0m0.103s 00:10:41.850 sys 0m0.098s 00:10:41.850 10:54:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.850 10:54:28 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:41.850 ************************************ 00:10:41.850 END TEST nvme_flexible_data_placement 00:10:41.850 ************************************ 00:10:41.850 ************************************ 00:10:41.850 END TEST nvme_fdp 00:10:41.850 ************************************ 00:10:41.850 00:10:41.850 real 0m8.948s 00:10:41.850 user 0m1.508s 00:10:41.850 sys 0m2.458s 00:10:41.850 10:54:28 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.850 10:54:28 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:41.850 10:54:28 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:41.850 10:54:28 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:41.850 10:54:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:41.850 10:54:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.850 10:54:28 -- common/autotest_common.sh@10 -- # set +x 00:10:42.110 ************************************ 00:10:42.110 START TEST nvme_rpc 00:10:42.110 ************************************ 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:42.110 * Looking for test storage... 
00:10:42.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.110 10:54:28 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.110 --rc genhtml_branch_coverage=1 00:10:42.110 --rc genhtml_function_coverage=1 00:10:42.110 --rc genhtml_legend=1 00:10:42.110 --rc geninfo_all_blocks=1 00:10:42.110 --rc geninfo_unexecuted_blocks=1 00:10:42.110 00:10:42.110 ' 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.110 --rc genhtml_branch_coverage=1 00:10:42.110 --rc genhtml_function_coverage=1 00:10:42.110 --rc genhtml_legend=1 00:10:42.110 --rc geninfo_all_blocks=1 00:10:42.110 --rc geninfo_unexecuted_blocks=1 00:10:42.110 00:10:42.110 ' 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.110 --rc genhtml_branch_coverage=1 00:10:42.110 --rc genhtml_function_coverage=1 00:10:42.110 --rc genhtml_legend=1 00:10:42.110 --rc geninfo_all_blocks=1 00:10:42.110 --rc geninfo_unexecuted_blocks=1 00:10:42.110 00:10:42.110 ' 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.110 --rc genhtml_branch_coverage=1 00:10:42.110 --rc genhtml_function_coverage=1 00:10:42.110 --rc genhtml_legend=1 00:10:42.110 --rc geninfo_all_blocks=1 00:10:42.110 --rc geninfo_unexecuted_blocks=1 00:10:42.110 00:10:42.110 ' 00:10:42.110 10:54:28 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.110 10:54:28 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:42.110 10:54:28 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:42.111 10:54:28 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:42.370 10:54:28 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:42.370 10:54:28 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:42.370 10:54:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:42.370 10:54:29 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67079 00:10:42.370 10:54:29 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:42.370 10:54:29 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:42.370 10:54:29 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67079 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67079 ']' 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.370 10:54:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.370 [2024-11-15 10:54:29.180685] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:10:42.371 [2024-11-15 10:54:29.180804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67079 ] 00:10:42.630 [2024-11-15 10:54:29.360667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.630 [2024-11-15 10:54:29.475674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.630 [2024-11-15 10:54:29.475709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.567 10:54:30 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.567 10:54:30 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:43.567 10:54:30 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:43.826 Nvme0n1 00:10:43.826 10:54:30 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:43.826 10:54:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:44.084 request: 00:10:44.084 { 00:10:44.084 "bdev_name": "Nvme0n1", 00:10:44.084 "filename": "non_existing_file", 00:10:44.084 "method": "bdev_nvme_apply_firmware", 00:10:44.084 "req_id": 1 00:10:44.084 } 00:10:44.084 Got JSON-RPC error response 00:10:44.084 response: 00:10:44.084 { 00:10:44.084 "code": -32603, 00:10:44.084 "message": "open file failed." 00:10:44.084 } 00:10:44.084 10:54:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:44.084 10:54:30 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:44.084 10:54:30 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:44.342 10:54:31 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:44.342 10:54:31 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67079 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67079 ']' 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67079 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67079 00:10:44.342 killing process with pid 67079 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67079' 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67079 00:10:44.342 10:54:31 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67079 00:10:46.878 ************************************ 00:10:46.878 END TEST nvme_rpc 00:10:46.878 ************************************ 00:10:46.878 00:10:46.878 real 0m4.631s 00:10:46.878 user 0m8.414s 00:10:46.878 sys 0m0.792s 00:10:46.878 10:54:33 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.878 10:54:33 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 10:54:33 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:46.878 10:54:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:46.878 10:54:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.878 10:54:33 -- common/autotest_common.sh@10 -- # set +x 00:10:46.878 ************************************ 00:10:46.878 START TEST nvme_rpc_timeouts 00:10:46.878 ************************************ 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:46.878 * Looking for test storage... 00:10:46.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.878 10:54:33 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.878 --rc genhtml_branch_coverage=1 00:10:46.878 --rc genhtml_function_coverage=1 00:10:46.878 --rc genhtml_legend=1 00:10:46.878 --rc geninfo_all_blocks=1 00:10:46.878 --rc geninfo_unexecuted_blocks=1 00:10:46.878 00:10:46.878 ' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.878 --rc genhtml_branch_coverage=1 00:10:46.878 --rc genhtml_function_coverage=1 00:10:46.878 --rc genhtml_legend=1 00:10:46.878 --rc geninfo_all_blocks=1 00:10:46.878 --rc geninfo_unexecuted_blocks=1 00:10:46.878 00:10:46.878 ' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.878 --rc genhtml_branch_coverage=1 00:10:46.878 --rc genhtml_function_coverage=1 00:10:46.878 --rc genhtml_legend=1 00:10:46.878 --rc geninfo_all_blocks=1 00:10:46.878 --rc geninfo_unexecuted_blocks=1 00:10:46.878 00:10:46.878 ' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:46.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.878 --rc genhtml_branch_coverage=1 00:10:46.878 --rc genhtml_function_coverage=1 00:10:46.878 --rc genhtml_legend=1 00:10:46.878 --rc geninfo_all_blocks=1 00:10:46.878 --rc geninfo_unexecuted_blocks=1 00:10:46.878 00:10:46.878 ' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67156 00:10:46.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
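The "Waiting for process to start up..." line above, and the waitforlisten trace that follows, is autotest's readiness gate: after launching spdk_tgt it polls the target's RPC socket until the server answers, giving up if the process dies or the retry budget (max_retries=100, as traced below) runs out. A plausible reduction of the idea, not the in-tree implementation; the rpc_get_methods call and the 0.1s interval are illustrative choices:

    # Sketch of the waitforlisten pattern; rpc.py stands in for
    # /home/vagrant/spdk_repo/spdk/scripts/rpc.py.
    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1               # target died
            rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1                                            # assumed interval
        done
        return 1
    }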
00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67156 00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67188 00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:46.878 10:54:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67188 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67188 ']' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.878 10:54:33 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:47.137 [2024-11-15 10:54:33.750673] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:10:47.137 [2024-11-15 10:54:33.750997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67188 ] 00:10:47.137 [2024-11-15 10:54:33.936125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:47.397 [2024-11-15 10:54:34.052215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.397 [2024-11-15 10:54:34.052248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.335 10:54:34 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:48.335 10:54:34 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:48.335 10:54:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:48.335 Checking default timeout settings: 00:10:48.335 10:54:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:48.594 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:48.594 Making settings changes with rpc: 00:10:48.594 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:48.853 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:48.853 Check default vs. 
modified settings: 00:10:48.853 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.113 Setting action_on_timeout is changed as expected. 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.113 Setting timeout_us is changed as expected. 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
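Each setting is verified the same way, as the trace shows: grep the field out of the default and modified save_config dumps, strip everything but alphanumerics with sed, and require that the value actually moved (action_on_timeout went none to abort, timeout_us 0 to 12000000). Condensed into one loop; the real helper resolves each setting step by step and prints a diagnostic before failing:

    # The default-vs-modified comparison, condensed from the trace above.
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67156 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67156 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before == "$after" ]] && exit 1    # option was not applied
        echo "Setting $setting is changed as expected."
    done

The timeout_admin_us pass follows below.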
00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:49.113 Setting timeout_admin_us is changed as expected. 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67156 /tmp/settings_modified_67156 00:10:49.113 10:54:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67188 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67188 ']' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67188 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67188 00:10:49.113 killing process with pid 67188 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67188' 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67188 00:10:49.113 10:54:35 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67188 00:10:51.660 RPC TIMEOUT SETTING TEST PASSED. 00:10:51.660 10:54:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
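Stripped of the bookkeeping, the whole timeouts test is three RPC calls bracketing that comparison: snapshot the target config, apply new NVMe timeout options, snapshot again, then diff. The invocations are verbatim from this run; the redirections into the settings files are implied by the tmpfile names, since xtrace does not show them:

    rpc.py save_config > /tmp/settings_default_67156
    rpc.py bdev_nvme_set_options --timeout-us=12000000 \
            --timeout-admin-us=24000000 --action-on-timeout=abort
    rpc.py save_config > /tmp/settings_modified_67156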
00:10:51.660 ************************************ 00:10:51.660 END TEST nvme_rpc_timeouts 00:10:51.660 ************************************ 00:10:51.660 00:10:51.660 real 0m4.922s 00:10:51.660 user 0m9.242s 00:10:51.660 sys 0m0.816s 00:10:51.660 10:54:38 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.660 10:54:38 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:51.660 10:54:38 -- spdk/autotest.sh@239 -- # uname -s 00:10:51.660 10:54:38 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:51.660 10:54:38 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:51.660 10:54:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:51.660 10:54:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.660 10:54:38 -- common/autotest_common.sh@10 -- # set +x 00:10:51.660 ************************************ 00:10:51.660 START TEST sw_hotplug 00:10:51.660 ************************************ 00:10:51.660 10:54:38 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:51.960 * Looking for test storage... 00:10:51.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.960 10:54:38 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:51.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.960 --rc genhtml_branch_coverage=1 00:10:51.960 --rc genhtml_function_coverage=1 00:10:51.960 --rc genhtml_legend=1 00:10:51.960 --rc geninfo_all_blocks=1 00:10:51.960 --rc geninfo_unexecuted_blocks=1 00:10:51.960 00:10:51.960 ' 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:51.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.960 --rc genhtml_branch_coverage=1 00:10:51.960 --rc genhtml_function_coverage=1 00:10:51.960 --rc genhtml_legend=1 00:10:51.960 --rc geninfo_all_blocks=1 00:10:51.960 --rc geninfo_unexecuted_blocks=1 00:10:51.960 00:10:51.960 ' 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:51.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.960 --rc genhtml_branch_coverage=1 00:10:51.960 --rc genhtml_function_coverage=1 00:10:51.960 --rc genhtml_legend=1 00:10:51.960 --rc geninfo_all_blocks=1 00:10:51.960 --rc geninfo_unexecuted_blocks=1 00:10:51.960 00:10:51.960 ' 00:10:51.960 10:54:38 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:51.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.960 --rc genhtml_branch_coverage=1 00:10:51.960 --rc genhtml_function_coverage=1 00:10:51.960 --rc genhtml_legend=1 00:10:51.960 --rc geninfo_all_blocks=1 00:10:51.960 --rc geninfo_unexecuted_blocks=1 00:10:51.960 00:10:51.960 ' 00:10:51.960 10:54:38 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:52.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:52.788 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.788 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.788 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.788 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:52.788 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:52.788 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:52.788 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
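nvme_in_userspace, expanded in the trace that follows, enumerates NVMe controllers straight from lspci: an NVMe device is PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02, so one pipeline yields the candidate BDFs before the per-device pci_can_use and sysfs driver checks below:

    # The probe at the heart of nvme_in_userspace, verbatim from the trace:
    # match class/subclass "0108" with prog-if 02, print the BDF column.
    lspci -mm -n -D | grep -i -- -p02 |
        awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'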
00:10:52.788 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:52.788 10:54:39 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:52.788 10:54:39 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:52.788 10:54:39 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:52.788 10:54:39 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:52.788 10:54:39 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:52.788 10:54:39 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.789 10:54:39 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:52.789 10:54:39 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:52.789 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:52.789 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:52.789 10:54:39 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:53.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:53.616 Waiting for block devices as requested 00:10:53.616 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.875 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:53.875 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.133 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:59.401 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:59.401 10:54:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:59.401 10:54:45 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:59.660 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:59.919 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:59.919 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:00.178 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:00.436 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:00.436 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:00.696 10:54:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68081 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:00.696 10:54:47 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:00.696 10:54:47 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:00.696 10:54:47 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:00.696 10:54:47 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:00.696 10:54:47 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:00.696 10:54:47 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:00.955 Initializing NVMe Controllers 00:11:00.955 Attaching to 0000:00:10.0 00:11:00.955 Attaching to 0000:00:11.0 00:11:00.955 Attached to 0000:00:11.0 00:11:00.955 Attached to 0000:00:10.0 00:11:00.955 Initialization complete. Starting I/O... 
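The timing_cmd trace above is how autotest measures remove_attach_helper: TIMEFORMAT=%2R narrows bash's time keyword to bare elapsed seconds, and the captured value becomes the "took 43.13s" summary at the end of this run. A minimal sketch of the trick, not the exact autotest_common.sh code, which juggles file descriptors so the helper's own output still reaches the log:

    timing_cmd() {
        local time TIMEFORMAT=%2R          # elapsed seconds, two decimals
        # `time` writes to stderr of the compound command; capture just that.
        # Unlike the real helper, this sketch discards the command's output.
        time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$time"                       # e.g. 43.13
    }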
00:11:00.955 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:00.955 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:00.955 00:11:01.890 QEMU NVMe Ctrl (12341 ): 1460 I/Os completed (+1460) 00:11:01.890 QEMU NVMe Ctrl (12340 ): 1460 I/Os completed (+1460) 00:11:01.890 00:11:03.265 QEMU NVMe Ctrl (12341 ): 3508 I/Os completed (+2048) 00:11:03.265 QEMU NVMe Ctrl (12340 ): 3508 I/Os completed (+2048) 00:11:03.265 00:11:04.200 QEMU NVMe Ctrl (12341 ): 5636 I/Os completed (+2128) 00:11:04.200 QEMU NVMe Ctrl (12340 ): 5637 I/Os completed (+2129) 00:11:04.200 00:11:05.176 QEMU NVMe Ctrl (12341 ): 7800 I/Os completed (+2164) 00:11:05.176 QEMU NVMe Ctrl (12340 ): 7801 I/Os completed (+2164) 00:11:05.176 00:11:06.112 QEMU NVMe Ctrl (12341 ): 9976 I/Os completed (+2176) 00:11:06.112 QEMU NVMe Ctrl (12340 ): 9977 I/Os completed (+2176) 00:11:06.112 00:11:06.680 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.680 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.680 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.680 [2024-11-15 10:54:53.468076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:06.680 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:06.680 [2024-11-15 10:54:53.469956] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.470007] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.470028] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.470049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:06.680 [2024-11-15 10:54:53.472748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.472799] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.472817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.472836] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.680 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.680 [2024-11-15 10:54:53.511348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:06.680 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:06.680 [2024-11-15 10:54:53.512933] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.680 [2024-11-15 10:54:53.512988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 [2024-11-15 10:54:53.513019] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 [2024-11-15 10:54:53.513039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:06.681 [2024-11-15 10:54:53.515499] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 [2024-11-15 10:54:53.515554] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 [2024-11-15 10:54:53.515576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 [2024-11-15 10:54:53.515595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.681 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:06.681 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.940 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.940 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:06.940 Attaching to 0000:00:10.0 00:11:06.940 Attached to 0000:00:10.0 00:11:07.199 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:07.199 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:07.199 10:54:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:07.199 Attaching to 0000:00:11.0 00:11:07.199 Attached to 0000:00:11.0 00:11:08.137 QEMU NVMe Ctrl (12340 ): 2008 I/Os completed (+2008) 00:11:08.137 QEMU NVMe Ctrl (12341 ): 1780 I/Os completed (+1780) 00:11:08.137 00:11:09.075 QEMU NVMe Ctrl (12340 ): 4192 I/Os completed (+2184) 00:11:09.075 QEMU NVMe Ctrl (12341 ): 3964 I/Os completed (+2184) 00:11:09.075 00:11:10.011 QEMU NVMe Ctrl (12340 ): 6400 I/Os completed (+2208) 00:11:10.011 QEMU NVMe Ctrl (12341 ): 6172 I/Os completed (+2208) 00:11:10.011 00:11:10.947 QEMU NVMe Ctrl (12340 ): 8604 I/Os completed (+2204) 00:11:10.948 QEMU NVMe Ctrl (12341 ): 8376 I/Os completed (+2204) 00:11:10.948 00:11:11.882 QEMU NVMe Ctrl (12340 ): 10816 I/Os completed (+2212) 00:11:11.882 QEMU NVMe Ctrl (12341 ): 10588 I/Os completed (+2212) 00:11:11.882 00:11:12.845 QEMU NVMe Ctrl (12340 ): 13024 I/Os completed (+2208) 00:11:12.845 QEMU NVMe Ctrl (12341 ): 12796 I/Os completed (+2208) 00:11:12.845 00:11:14.220 QEMU NVMe Ctrl (12340 ): 15148 I/Os completed (+2124) 00:11:14.220 QEMU NVMe Ctrl (12341 ): 14920 I/Os completed (+2124) 00:11:14.220 00:11:14.827 QEMU NVMe Ctrl (12340 ): 17288 I/Os completed (+2140) 00:11:14.827 QEMU NVMe Ctrl (12341 ): 17060 I/Os completed (+2140) 00:11:14.827 
00:11:16.205 QEMU NVMe Ctrl (12340 ): 19444 I/Os completed (+2156) 00:11:16.205 QEMU NVMe Ctrl (12341 ): 19216 I/Os completed (+2156) 00:11:16.205 00:11:17.139 QEMU NVMe Ctrl (12340 ): 21640 I/Os completed (+2196) 00:11:17.139 QEMU NVMe Ctrl (12341 ): 21412 I/Os completed (+2196) 00:11:17.139 00:11:18.074 QEMU NVMe Ctrl (12340 ): 23824 I/Os completed (+2184) 00:11:18.074 QEMU NVMe Ctrl (12341 ): 23596 I/Os completed (+2184) 00:11:18.074 00:11:19.009 QEMU NVMe Ctrl (12340 ): 25968 I/Os completed (+2144) 00:11:19.009 QEMU NVMe Ctrl (12341 ): 25742 I/Os completed (+2146) 00:11:19.009 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.269 [2024-11-15 10:55:05.870330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:19.269 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:19.269 [2024-11-15 10:55:05.872053] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.872115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.872136] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.872160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:19.269 [2024-11-15 10:55:05.877284] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.877338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.877355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.877375] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.269 [2024-11-15 10:55:05.910121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:19.269 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:19.269 [2024-11-15 10:55:05.911729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.911780] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.911807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.911825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:19.269 [2024-11-15 10:55:05.914361] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.914407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.914429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 [2024-11-15 10:55:05.914449] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.269 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:19.269 EAL: Scan for (pci) bus failed. 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:19.269 10:55:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.269 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.269 Attaching to 0000:00:10.0 00:11:19.269 Attached to 0000:00:10.0 00:11:19.528 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.528 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.528 10:55:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.528 Attaching to 0000:00:11.0 00:11:19.528 Attached to 0000:00:11.0 00:11:20.096 QEMU NVMe Ctrl (12340 ): 1144 I/Os completed (+1144) 00:11:20.096 QEMU NVMe Ctrl (12341 ): 976 I/Os completed (+976) 00:11:20.096 00:11:21.035 QEMU NVMe Ctrl (12340 ): 3300 I/Os completed (+2156) 00:11:21.035 QEMU NVMe Ctrl (12341 ): 3132 I/Os completed (+2156) 00:11:21.035 00:11:21.972 QEMU NVMe Ctrl (12340 ): 5452 I/Os completed (+2152) 00:11:21.972 QEMU NVMe Ctrl (12341 ): 5284 I/Os completed (+2152) 00:11:21.972 00:11:22.910 QEMU NVMe Ctrl (12340 ): 7628 I/Os completed (+2176) 00:11:22.910 QEMU NVMe Ctrl (12341 ): 7460 I/Os completed (+2176) 00:11:22.910 00:11:23.847 QEMU NVMe Ctrl (12340 ): 9808 I/Os completed (+2180) 00:11:23.847 QEMU NVMe Ctrl (12341 ): 9640 I/Os completed (+2180) 00:11:23.847 00:11:25.246 QEMU NVMe Ctrl (12340 ): 11972 I/Os completed (+2164) 00:11:25.246 QEMU NVMe Ctrl (12341 ): 11806 I/Os completed (+2166) 00:11:25.246 00:11:25.814 QEMU NVMe Ctrl (12340 ): 14164 I/Os completed (+2192) 00:11:25.814 QEMU NVMe Ctrl (12341 ): 13998 I/Os completed (+2192) 00:11:25.814 
00:11:27.192 QEMU NVMe Ctrl (12340 ): 16356 I/Os completed (+2192) 00:11:27.192 QEMU NVMe Ctrl (12341 ): 16190 I/Os completed (+2192) 00:11:27.192 00:11:28.128 QEMU NVMe Ctrl (12340 ): 18552 I/Os completed (+2196) 00:11:28.128 QEMU NVMe Ctrl (12341 ): 18386 I/Os completed (+2196) 00:11:28.128 00:11:29.079 QEMU NVMe Ctrl (12340 ): 20732 I/Os completed (+2180) 00:11:29.079 QEMU NVMe Ctrl (12341 ): 20566 I/Os completed (+2180) 00:11:29.079 00:11:30.017 QEMU NVMe Ctrl (12340 ): 22908 I/Os completed (+2176) 00:11:30.017 QEMU NVMe Ctrl (12341 ): 22742 I/Os completed (+2176) 00:11:30.017 00:11:30.955 QEMU NVMe Ctrl (12340 ): 25088 I/Os completed (+2180) 00:11:30.955 QEMU NVMe Ctrl (12341 ): 24922 I/Os completed (+2180) 00:11:30.955 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.523 [2024-11-15 10:55:18.215720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:31.523 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:31.523 [2024-11-15 10:55:18.217573] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.217737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.217791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.217895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:31.523 [2024-11-15 10:55:18.220862] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.220996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.221049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.221145] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.523 [2024-11-15 10:55:18.254317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:31.523 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:31.523 [2024-11-15 10:55:18.255887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.255939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.255962] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.255982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:31.523 [2024-11-15 10:55:18.258539] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.258576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.258600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 [2024-11-15 10:55:18.258617] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:31.523 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:31.523 EAL: Scan for (pci) bus failed. 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.523 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.783 Attaching to 0000:00:10.0 00:11:31.783 Attached to 0000:00:10.0 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.783 10:55:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.783 Attaching to 0000:00:11.0 00:11:31.783 Attached to 0000:00:11.0 00:11:31.783 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:31.783 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:31.783 [2024-11-15 10:55:18.601082] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:44.063 10:55:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:44.063 10:55:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:44.063 10:55:30 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.13 00:11:44.063 10:55:30 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.13 00:11:44.063 10:55:30 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:44.063 10:55:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.13 00:11:44.063 10:55:30 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.13 2 00:11:44.063 remove_attach_helper took 43.13s to complete (handling 2 nvme drive(s)) 10:55:30 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68081 00:11:50.627 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68081) - No such process 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68081 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68618 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:50.627 10:55:36 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68618 00:11:50.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.627 10:55:36 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68618 ']' 00:11:50.627 10:55:36 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.627 10:55:36 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.627 10:55:36 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.627 10:55:36 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.627 10:55:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 [2024-11-15 10:55:36.720928] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
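Between reaping the previous helper (@93/@95: kill -0 reports 'No such process', then wait) and issuing RPCs, the script blocks in waitforlisten until the freshly started spdk_tgt (pid 68618) owns its RPC socket. The locals at @835-@842 pin down its interface; the polling body below is an assumption:

    # Rough sketch of waitforlisten (autotest_common.sh@835-842); only the
    # argument check, the locals and the banner are visible in the trace.
    waitforlisten() {
        [[ -n ${1:-} ]] || return 1                 # @835: a target pid is required
        local rpc_addr=${2:-/var/tmp/spdk.sock}     # @839
        local max_retries=100                       # @840
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$1" 2> /dev/null || return 1   # target died before listening
            [[ -S $rpc_addr ]] && return 0          # socket is up, RPCs can flow
            sleep 0.1
        done
        return 1
    }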
00:11:50.627 [2024-11-15 10:55:36.721051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68618 ] 00:11:50.627 [2024-11-15 10:55:36.901979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.627 [2024-11-15 10:55:37.014361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:51.195 10:55:37 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:51.195 10:55:37 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:57.763 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.763 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.763 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.763 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.764 10:55:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.764 10:55:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.764 10:55:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.764 [2024-11-15 10:55:43.997289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
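This second phase is tgt_run_hotplug (sw_hotplug.sh@151): the same remove/attach exercise, but driven against a live SPDK target whose hotplug poller re-attaches the devices. Reassembling the trace above (@107-@117; the trap string is shown verbatim) gives roughly this shape, with the backgrounding of spdk_tgt assumed:

    # Approximate reconstruction of tgt_run_hotplug (sw_hotplug.sh@107-122).
    tgt_run_hotplug() {
        local dev                              # @107
        "$rootdir/build/bin/spdk_tgt" &        # @109; $rootdir assumed
        spdk_tgt_pid=$!                        # @110 (68618 in this run)
        trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' \
            SIGINT SIGTERM EXIT                # @112, verbatim in the trace
        waitforlisten "$spdk_tgt_pid"          # @113
        rpc_cmd bdev_nvme_set_hotplug -e       # @115: enable the hotplug poller
        debug_remove_attach_helper 3 6 true    # @117: 3 events, 6s wait, use_bdev
    }

Later in the trace the poller is toggled off and back on (@119 bdev_nvme_set_hotplug -d, @120 -e) before the helper runs once more at @122.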
00:11:57.764 [2024-11-15 10:55:43.999850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:43.999897] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:43.999916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 [2024-11-15 10:55:43.999946] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:43.999959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:43.999975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 [2024-11-15 10:55:43.999989] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:44.000006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:44.000019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 [2024-11-15 10:55:44.000038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:44.000049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:44.000064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 10:55:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.764 [2024-11-15 10:55:44.396650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
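The /dev/fd/63 argument to jq above is bash process substitution, which together with the @12/@13 records pins down the bdev_bdfs helper almost exactly: it asks the target for its bdevs over RPC and reduces the JSON to a sorted, unique list of NVMe PCI addresses:

    # bdev_bdfs as implied by the xtrace at sw_hotplug.sh@12-13; rpc_cmd is
    # the autotest wrapper around scripts/rpc.py.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }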
00:11:57.764 [2024-11-15 10:55:44.399078] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:44.399123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:44.399143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 [2024-11-15 10:55:44.399167] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:44.399181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:44.399194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 [2024-11-15 10:55:44.399209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:44.399221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:44.399235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 [2024-11-15 10:55:44.399248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.764 [2024-11-15 10:55:44.399262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.764 [2024-11-15 10:55:44.399274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.764 10:55:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.764 10:55:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.764 10:55:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:57.764 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.023 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
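With use_bdev=true the helper does not trust sysfs alone; it polls the target until no bdev still reports one of the removed PCI addresses, printing the stragglers each round. That is where the 'Still waiting for 0000:00:11.0 to be gone' line above comes from. The loop structure is assumed from the repeated @50/@51 records:

    # Assumed shape of the removal wait (sw_hotplug.sh@50-51).
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done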
00:11:58.282 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.282 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.282 10:55:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.492 10:55:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.492 10:55:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.492 10:55:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.492 10:55:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.492 10:55:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.492 10:55:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.492 [2024-11-15 10:55:57.076368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
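After the rescan the helper sleeps at @66 (12 seconds here, twice the 6-second hotplug_wait, so presumably 2 * hotplug_wait) and then asserts that the bdev list is back to exactly the expected pair. The backslash-heavy pattern at @71 is just xtrace escaping the right-hand side of a quoted [[ == ]] comparison, i.e. a literal match against '0000:00:10.0 0000:00:11.0'. In sketch form, with variable names assumed:

    # Assumed re-attach check behind sw_hotplug.sh@66-71.
    sleep $((2 * hotplug_wait))         # @66: give the poller time to re-attach
    bdfs=($(bdev_bdfs))                 # @70
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]   # @71: must equal the original BDF set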
00:12:10.492 [2024-11-15 10:55:57.078849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.492 [2024-11-15 10:55:57.078904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.492 [2024-11-15 10:55:57.078924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.492 [2024-11-15 10:55:57.078975] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.492 [2024-11-15 10:55:57.078990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.492 [2024-11-15 10:55:57.079005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.492 [2024-11-15 10:55:57.079018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.492 [2024-11-15 10:55:57.079032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.492 [2024-11-15 10:55:57.079044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.492 [2024-11-15 10:55:57.079058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.492 [2024-11-15 10:55:57.079069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.492 [2024-11-15 10:55:57.079083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.492 10:55:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:10.492 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:10.752 [2024-11-15 10:55:57.475684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:10.752 [2024-11-15 10:55:57.477969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.752 [2024-11-15 10:55:57.478010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.752 [2024-11-15 10:55:57.478033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.752 [2024-11-15 10:55:57.478054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.752 [2024-11-15 10:55:57.478068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.752 [2024-11-15 10:55:57.478080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.752 [2024-11-15 10:55:57.478095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.752 [2024-11-15 10:55:57.478106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.752 [2024-11-15 10:55:57.478120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.752 [2024-11-15 10:55:57.478133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.752 [2024-11-15 10:55:57.478146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.752 [2024-11-15 10:55:57.478157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.752 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:10.752 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.752 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.752 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.752 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.752 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.752 10:55:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.752 10:55:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.010 10:55:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.010 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:11.269 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.269 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.269 10:55:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:23.496 10:56:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:23.496 10:56:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:23.496 10:56:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:23.496 10:56:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.496 10:56:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.496 10:56:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.496 10:56:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.496 10:56:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 10:56:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.496 [2024-11-15 10:56:10.055561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:23.496 [2024-11-15 10:56:10.058451] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.496 [2024-11-15 10:56:10.058624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.496 [2024-11-15 10:56:10.058747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.496 [2024-11-15 10:56:10.058819] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.496 [2024-11-15 10:56:10.058948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.496 [2024-11-15 10:56:10.059014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.496 [2024-11-15 10:56:10.059031] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.496 [2024-11-15 10:56:10.059045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.496 [2024-11-15 10:56:10.059057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.496 [2024-11-15 10:56:10.059073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.496 [2024-11-15 10:56:10.059084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.496 [2024-11-15 10:56:10.059098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.496 10:56:10 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.496 10:56:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.496 10:56:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.496 10:56:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:23.496 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:23.754 [2024-11-15 10:56:10.554758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:23.754 [2024-11-15 10:56:10.557267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.754 [2024-11-15 10:56:10.557312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.754 [2024-11-15 10:56:10.557334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.754 [2024-11-15 10:56:10.557359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.754 [2024-11-15 10:56:10.557375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.754 [2024-11-15 10:56:10.557389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.754 [2024-11-15 10:56:10.557407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.754 [2024-11-15 10:56:10.557419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.754 [2024-11-15 10:56:10.557440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.754 [2024-11-15 10:56:10.557454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.754 [2024-11-15 10:56:10.557471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.754 [2024-11-15 10:56:10.557483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:24.012 10:56:10 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.012 10:56:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.012 10:56:10 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.012 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:24.270 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:24.270 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.270 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.270 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.270 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:24.270 10:56:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.270 10:56:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.270 10:56:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:36.489 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:36.489 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:36.489 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:36.489 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:36.489 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:36.489 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:36.489 10:56:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:12:36.490 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.490 10:56:23 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:36.490 10:56:23 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:36.490 10:56:23 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.047 10:56:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.047 10:56:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.047 [2024-11-15 10:56:29.199247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
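The 'remove_attach_helper took 45.16s' summaries are produced by two wrappers whose locals are visible in the trace: timing_cmd (autotest_common.sh@709-@722) runs the helper under bash's time builtin with TIMEFORMAT=%2R so only the elapsed seconds survive, and debug_remove_attach_helper (sw_hotplug.sh@19-@22) formats the result. A rough sketch follows; the capture plumbing is assumed, and the [[ -t 0 ]] / exec step around stdin seen at @711 is omitted:

    # Approximate reconstruction; only the locals, TIMEFORMAT and the printf
    # format string are taken verbatim from the trace.
    timing_cmd() {                      # autotest_common.sh@709-722
        local cmd_es=0
        local time=0 TIMEFORMAT=%2R
        # the time keyword writes '%2R' (elapsed seconds) to the group's stderr
        time=$({ time "$@" > /dev/null 2> /dev/null; } 2>&1) || cmd_es=$?
        echo "$time"                    # @720: e.g. 45.16
        return "$cmd_es"                # @722
    }

    debug_remove_attach_helper() {      # sw_hotplug.sh@19-22
        local helper_time=0
        helper_time=$(timing_cmd remove_attach_helper "$@")
        printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' \
            "$helper_time" "${#nvmes[@]}"
    }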
00:12:43.047 [2024-11-15 10:56:29.201725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.201875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.201984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 [2024-11-15 10:56:29.202096] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.202180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.202276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 [2024-11-15 10:56:29.202383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.202463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.202518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 [2024-11-15 10:56:29.202710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.202746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.202802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 10:56:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:43.047 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:43.047 [2024-11-15 10:56:29.598621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:43.047 [2024-11-15 10:56:29.600454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.600501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.600665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 [2024-11-15 10:56:29.600706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.600725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.600738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 [2024-11-15 10:56:29.600755] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.600766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.600781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.047 [2024-11-15 10:56:29.600794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:43.047 [2024-11-15 10:56:29.600808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:43.047 [2024-11-15 10:56:29.600819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:43.048 10:56:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.048 10:56:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.048 10:56:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:43.048 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:43.307 10:56:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:43.307 10:56:30 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:55.542 10:56:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.542 10:56:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.542 10:56:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:55.542 10:56:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.542 10:56:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.542 [2024-11-15 10:56:42.278351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:55.542 [2024-11-15 10:56:42.280810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.542 [2024-11-15 10:56:42.280982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.542 [2024-11-15 10:56:42.281018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.542 [2024-11-15 10:56:42.281046] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.542 [2024-11-15 10:56:42.281059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.542 [2024-11-15 10:56:42.281074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.542 [2024-11-15 10:56:42.281089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.542 [2024-11-15 10:56:42.281107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.542 [2024-11-15 10:56:42.281119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.542 [2024-11-15 10:56:42.281135] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:55.542 [2024-11-15 10:56:42.281147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:55.542 [2024-11-15 10:56:42.281162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:55.542 10:56:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:55.542 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:56.109 [2024-11-15 10:56:42.677706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:56.110 [2024-11-15 10:56:42.679613] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.110 [2024-11-15 10:56:42.679656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.110 [2024-11-15 10:56:42.679676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.110 [2024-11-15 10:56:42.679700] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.110 [2024-11-15 10:56:42.679718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.110 [2024-11-15 10:56:42.679730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.110 [2024-11-15 10:56:42.679746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.110 [2024-11-15 10:56:42.679757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.110 [2024-11-15 10:56:42.679771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.110 [2024-11-15 10:56:42.679785] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:56.110 [2024-11-15 10:56:42.679799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.110 [2024-11-15 10:56:42.679810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:56.110 10:56:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.110 10:56:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.110 10:56:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.110 10:56:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:56.368 10:56:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.570 10:56:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.570 10:56:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.570 10:56:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.570 [2024-11-15 10:56:55.257502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:08.570 [2024-11-15 10:56:55.260592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.570 [2024-11-15 10:56:55.260749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.570 [2024-11-15 10:56:55.260912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.570 [2024-11-15 10:56:55.260988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.570 [2024-11-15 10:56:55.261084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.570 [2024-11-15 10:56:55.261172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.570 [2024-11-15 10:56:55.261271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.570 [2024-11-15 10:56:55.261320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.570 [2024-11-15 10:56:55.261377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.570 [2024-11-15 10:56:55.261512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:08.570 [2024-11-15 10:56:55.261565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:08.570 [2024-11-15 10:56:55.261757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:08.570 10:56:55 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:08.570 10:56:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.570 10:56:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:08.570 10:56:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:08.570 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:09.138 [2024-11-15 10:56:55.756784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:09.138 [2024-11-15 10:56:55.761459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.138 [2024-11-15 10:56:55.761666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.138 [2024-11-15 10:56:55.761788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.138 [2024-11-15 10:56:55.761859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.138 [2024-11-15 10:56:55.761959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.138 [2024-11-15 10:56:55.762017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.138 [2024-11-15 10:56:55.762114] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.138 [2024-11-15 10:56:55.762153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.138 [2024-11-15 10:56:55.762208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.138 [2024-11-15 10:56:55.762315] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:09.138 [2024-11-15 10:56:55.762358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.138 [2024-11-15 10:56:55.762410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:13:09.138 10:56:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.138 10:56:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:09.138 10:56:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:09.138 10:56:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:09.397 10:56:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:21.614 10:57:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.614 10:57:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.614 10:57:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:21.614 10:57:08 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:13:21.614 10:57:08 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:13:21.614 10:57:08 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:13:21.614 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:13:21.614 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:21.615 10:57:08 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68618 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68618 ']' 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68618 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68618 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:21.615 10:57:08 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68618' 00:13:21.615 killing process with pid 68618 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68618 00:13:21.615 10:57:08 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68618 00:13:24.168 10:57:10 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:24.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:25.002 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:25.002 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:25.261 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:25.261 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:25.261 00:13:25.261 real 2m33.657s 00:13:25.261 user 1m51.335s 00:13:25.261 sys 0m22.557s 00:13:25.261 ************************************ 00:13:25.261 END TEST sw_hotplug 00:13:25.261 ************************************ 00:13:25.261 10:57:12 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:25.261 10:57:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 10:57:12 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:25.521 10:57:12 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:25.521 10:57:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.521 10:57:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.521 10:57:12 -- common/autotest_common.sh@10 -- # set +x 00:13:25.521 ************************************ 00:13:25.521 START TEST nvme_xnvme 00:13:25.521 ************************************ 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:25.521 * Looking for test storage... 
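The bdev_bdfs helper traced at sw_hotplug.sh@12-13 above is the heart of the hotplug check: it asks the running SPDK app for all bdevs over RPC and extracts the unique NVMe PCI addresses. A minimal sketch of that helper as the trace shows it, assuming rpc_cmd wraps scripts/rpc.py against the test app's RPC socket:

    # Resolve the PCI addresses (BDFs) backing the current NVMe bdevs,
    # matching the jq/sort pipeline in the trace above (the /dev/fd/63 there
    # is this process substitution).
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

    # sw_hotplug.sh@70-71 then compares the result against the expected set
    # after each remove/attach cycle:
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]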
00:13:25.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:25.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.521 --rc genhtml_branch_coverage=1 00:13:25.521 --rc genhtml_function_coverage=1 00:13:25.521 --rc genhtml_legend=1 00:13:25.521 --rc geninfo_all_blocks=1 00:13:25.521 --rc geninfo_unexecuted_blocks=1 00:13:25.521 00:13:25.521 ' 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:25.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.521 --rc genhtml_branch_coverage=1 00:13:25.521 --rc genhtml_function_coverage=1 00:13:25.521 --rc genhtml_legend=1 00:13:25.521 --rc geninfo_all_blocks=1 00:13:25.521 --rc geninfo_unexecuted_blocks=1 00:13:25.521 00:13:25.521 ' 00:13:25.521 10:57:12 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:25.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.521 --rc genhtml_branch_coverage=1 00:13:25.521 --rc genhtml_function_coverage=1 00:13:25.521 --rc genhtml_legend=1 00:13:25.521 --rc geninfo_all_blocks=1 00:13:25.521 --rc geninfo_unexecuted_blocks=1 00:13:25.521 00:13:25.521 ' 00:13:25.521 10:57:12 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:25.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:25.521 --rc genhtml_branch_coverage=1 00:13:25.521 --rc genhtml_function_coverage=1 00:13:25.521 --rc genhtml_legend=1 00:13:25.521 --rc geninfo_all_blocks=1 00:13:25.521 --rc geninfo_unexecuted_blocks=1 00:13:25.521 00:13:25.521 ' 00:13:25.521 10:57:12 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:25.521 10:57:12 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:25.781 10:57:12 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:25.781 10:57:12 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:25.781 10:57:12 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:25.781 10:57:12 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.781 10:57:12 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.781 10:57:12 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.781 10:57:12 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:25.781 10:57:12 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:25.781 10:57:12 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:25.781 10:57:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:25.781 10:57:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:25.781 10:57:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.781 
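The lt/cmp_versions calls traced from scripts/common.sh above decide whether the installed lcov (1.15 here) predates 2.x before choosing coverage flags. A condensed sketch of that comparison, assuming purely numeric fields (the real helper also routes each field through a decimal normalizer, visible as the decimal calls in the trace):

    # Compare dotted versions field by field; IFS=.-: splits on the same
    # separators seen in the trace (dots, dashes, colons).
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' ]]; return
            fi
        done
        [[ $op == '==' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov predates 2.x"   # true for this run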
************************************ 00:13:25.781 START TEST xnvme_to_malloc_dd_copy 00:13:25.781 ************************************ 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1129 -- # malloc_to_xnvme_copy 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:25.781 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:25.782 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:25.782 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:25.782 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:25.782 10:57:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:25.782 { 00:13:25.782 "subsystems": [ 00:13:25.782 { 00:13:25.782 "subsystem": "bdev", 00:13:25.782 "config": [ 00:13:25.782 { 00:13:25.782 "params": { 00:13:25.782 "block_size": 512, 00:13:25.782 "num_blocks": 2097152, 00:13:25.782 "name": "malloc0" 00:13:25.782 }, 00:13:25.782 "method": "bdev_malloc_create" 00:13:25.782 }, 00:13:25.782 { 00:13:25.782 "params": { 00:13:25.782 "io_mechanism": "libaio", 00:13:25.782 "filename": "/dev/nullb0", 00:13:25.782 "name": "null0" 00:13:25.782 }, 00:13:25.782 "method": "bdev_xnvme_create" 00:13:25.782 }, 
00:13:25.782 { 00:13:25.782 "method": "bdev_wait_for_examine" 00:13:25.782 } 00:13:25.782 ] 00:13:25.782 } 00:13:25.782 ] 00:13:25.782 } 00:13:25.782 [2024-11-15 10:57:12.525771] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:13:25.782 [2024-11-15 10:57:12.525890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69991 ] 00:13:26.041 [2024-11-15 10:57:12.709296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.041 [2024-11-15 10:57:12.829328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.575  [2024-11-15T10:57:16.374Z] Copying: 253/1024 [MB] (253 MBps) [2024-11-15T10:57:17.755Z] Copying: 508/1024 [MB] (255 MBps) [2024-11-15T10:57:18.325Z] Copying: 768/1024 [MB] (259 MBps) [2024-11-15T10:57:22.525Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:13:35.664 00:13:35.664 10:57:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:35.664 10:57:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:35.664 10:57:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:35.664 10:57:22 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:35.664 { 00:13:35.664 "subsystems": [ 00:13:35.664 { 00:13:35.664 "subsystem": "bdev", 00:13:35.664 "config": [ 00:13:35.664 { 00:13:35.664 "params": { 00:13:35.664 "block_size": 512, 00:13:35.664 "num_blocks": 2097152, 00:13:35.664 "name": "malloc0" 00:13:35.664 }, 00:13:35.664 "method": "bdev_malloc_create" 00:13:35.664 }, 00:13:35.664 { 00:13:35.664 "params": { 00:13:35.664 "io_mechanism": "libaio", 00:13:35.664 "filename": "/dev/nullb0", 00:13:35.664 "name": "null0" 00:13:35.664 }, 00:13:35.664 "method": "bdev_xnvme_create" 00:13:35.664 }, 00:13:35.664 { 00:13:35.664 "method": "bdev_wait_for_examine" 00:13:35.664 } 00:13:35.664 ] 00:13:35.664 } 00:13:35.664 ] 00:13:35.664 } 00:13:35.664 [2024-11-15 10:57:22.388225] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
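The JSON printed just above is what gen_conf emitted for the first copy: a 1 GiB malloc bdev (2097152 blocks of 512 bytes) and an xnvme bdev over /dev/nullb0 using libaio. xnvme.sh@42 feeds it to spdk_dd through process substitution, which is why the trace shows --json /dev/fd/62. A sketch of the equivalent standalone run, using the paths from this job:

    # Rebuild the config shown above and hand it to spdk_dd via process
    # substitution (the /dev/fd/62 seen in the trace).
    gen_conf() {
        echo '{ "subsystems": [ { "subsystem": "bdev", "config": [
          { "params": { "block_size": 512, "num_blocks": 2097152,
                        "name": "malloc0" },
            "method": "bdev_malloc_create" },
          { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0",
                        "name": "null0" },
            "method": "bdev_xnvme_create" },
          { "method": "bdev_wait_for_examine" } ] } ] }'
    }

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --ib=malloc0 --ob=null0 --json <(gen_conf)

The reverse pass a few lines below is the same invocation with --ib=null0 --ob=malloc0.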
00:13:35.664 [2024-11-15 10:57:22.388868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70107 ] 00:13:35.922 [2024-11-15 10:57:22.570642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.922 [2024-11-15 10:57:22.688272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.455  [2024-11-15T10:57:26.252Z] Copying: 257/1024 [MB] (257 MBps) [2024-11-15T10:57:27.191Z] Copying: 496/1024 [MB] (239 MBps) [2024-11-15T10:57:28.565Z] Copying: 734/1024 [MB] (237 MBps) [2024-11-15T10:57:28.565Z] Copying: 972/1024 [MB] (237 MBps) [2024-11-15T10:57:32.748Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:13:45.887 00:13:45.887 10:57:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:45.887 10:57:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:45.887 10:57:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:45.887 10:57:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:45.887 10:57:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:45.887 10:57:32 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:45.887 { 00:13:45.887 "subsystems": [ 00:13:45.887 { 00:13:45.887 "subsystem": "bdev", 00:13:45.887 "config": [ 00:13:45.887 { 00:13:45.887 "params": { 00:13:45.887 "block_size": 512, 00:13:45.887 "num_blocks": 2097152, 00:13:45.887 "name": "malloc0" 00:13:45.887 }, 00:13:45.887 "method": "bdev_malloc_create" 00:13:45.887 }, 00:13:45.887 { 00:13:45.887 "params": { 00:13:45.887 "io_mechanism": "io_uring", 00:13:45.887 "filename": "/dev/nullb0", 00:13:45.887 "name": "null0" 00:13:45.887 }, 00:13:45.887 "method": "bdev_xnvme_create" 00:13:45.887 }, 00:13:45.887 { 00:13:45.887 "method": "bdev_wait_for_examine" 00:13:45.887 } 00:13:45.887 ] 00:13:45.887 } 00:13:45.887 ] 00:13:45.887 } 00:13:45.887 [2024-11-15 10:57:32.375303] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
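From here the identical copy pair reruns under io_uring: xnvme.sh@38-39 just loops over the registered io mechanisms and rewrites one key of the xnvme bdev params before regenerating the config. A sketch of that loop, with the copy commands elided to comments:

    # One pass per io mechanism, as traced at xnvme.sh@38-39; only the
    # io_mechanism key of the bdev_xnvme_create params changes between runs.
    xnvme_io=(libaio io_uring)
    declare -A method_bdev_xnvme_create_0=(
        [name]=null0 [filename]=/dev/nullb0
    )
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        # spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)  # forward copy
        # spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)  # reverse copy
    done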
00:13:45.887 [2024-11-15 10:57:32.375431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70224 ] 00:13:45.887 [2024-11-15 10:57:32.556509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.887 [2024-11-15 10:57:32.673233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.420  [2024-11-15T10:57:36.247Z] Copying: 269/1024 [MB] (269 MBps) [2024-11-15T10:57:37.183Z] Copying: 534/1024 [MB] (264 MBps) [2024-11-15T10:57:38.121Z] Copying: 803/1024 [MB] (268 MBps) [2024-11-15T10:57:42.308Z] Copying: 1024/1024 [MB] (average 268 MBps) 00:13:55.447 00:13:55.447 10:57:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:55.447 10:57:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:55.447 10:57:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:55.448 10:57:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:55.448 { 00:13:55.448 "subsystems": [ 00:13:55.448 { 00:13:55.448 "subsystem": "bdev", 00:13:55.448 "config": [ 00:13:55.448 { 00:13:55.448 "params": { 00:13:55.448 "block_size": 512, 00:13:55.448 "num_blocks": 2097152, 00:13:55.448 "name": "malloc0" 00:13:55.448 }, 00:13:55.448 "method": "bdev_malloc_create" 00:13:55.448 }, 00:13:55.448 { 00:13:55.448 "params": { 00:13:55.448 "io_mechanism": "io_uring", 00:13:55.448 "filename": "/dev/nullb0", 00:13:55.448 "name": "null0" 00:13:55.448 }, 00:13:55.448 "method": "bdev_xnvme_create" 00:13:55.448 }, 00:13:55.448 { 00:13:55.448 "method": "bdev_wait_for_examine" 00:13:55.448 } 00:13:55.448 ] 00:13:55.448 } 00:13:55.448 ] 00:13:55.448 } 00:13:55.448 [2024-11-15 10:57:41.926581] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
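All four copies target /dev/nullb0, which the test created up front via init_null_blk gb=1 and tears down with remove_null_blk just after this run (dd/common.sh@186-191 in the trace). A sketch of those two helpers, assuming the real ones do little beyond the modprobe calls visible in the trace:

    # Back the null0 xnvme bdev with a 1 GiB null_blk device; bail out if
    # the module is already loaded rather than stacking parameters.
    init_null_blk() {
        [[ -e /sys/module/null_blk ]] && return 1
        modprobe null_blk "$@"        # gb=1 -> /dev/nullb0, 1 GiB
    }
    remove_null_blk() {
        modprobe -r null_blk
    }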
00:13:55.448 [2024-11-15 10:57:41.926702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70331 ] 00:13:55.448 [2024-11-15 10:57:42.108356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.448 [2024-11-15 10:57:42.222959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.981  [2024-11-15T10:57:45.778Z] Copying: 273/1024 [MB] (273 MBps) [2024-11-15T10:57:46.715Z] Copying: 546/1024 [MB] (273 MBps) [2024-11-15T10:57:47.652Z] Copying: 820/1024 [MB] (274 MBps) [2024-11-15T10:57:51.841Z] Copying: 1024/1024 [MB] (average 274 MBps) 00:14:04.980 00:14:04.980 10:57:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:04.980 10:57:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:04.980 00:14:04.980 real 0m38.957s 00:14:04.980 user 0m34.053s 00:14:04.980 sys 0m4.410s 00:14:04.980 10:57:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.980 10:57:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:04.980 ************************************ 00:14:04.980 END TEST xnvme_to_malloc_dd_copy 00:14:04.980 ************************************ 00:14:04.980 10:57:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:04.980 10:57:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.980 10:57:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.980 10:57:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.980 ************************************ 00:14:04.980 START TEST xnvme_bdevperf 00:14:04.980 ************************************ 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:04.980 
10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.980 10:57:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.980 { 00:14:04.980 "subsystems": [ 00:14:04.980 { 00:14:04.980 "subsystem": "bdev", 00:14:04.980 "config": [ 00:14:04.980 { 00:14:04.980 "params": { 00:14:04.980 "io_mechanism": "libaio", 00:14:04.980 "filename": "/dev/nullb0", 00:14:04.980 "name": "null0" 00:14:04.980 }, 00:14:04.980 "method": "bdev_xnvme_create" 00:14:04.980 }, 00:14:04.980 { 00:14:04.980 "method": "bdev_wait_for_examine" 00:14:04.981 } 00:14:04.981 ] 00:14:04.981 } 00:14:04.981 ] 00:14:04.981 } 00:14:04.981 [2024-11-15 10:57:51.546640] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:04.981 [2024-11-15 10:57:51.546751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70457 ] 00:14:04.981 [2024-11-15 10:57:51.729367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.240 [2024-11-15 10:57:51.846973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.527 Running I/O for 5 seconds... 00:14:07.440 153920.00 IOPS, 601.25 MiB/s [2024-11-15T10:57:55.237Z] 155232.00 IOPS, 606.38 MiB/s [2024-11-15T10:57:56.614Z] 155605.33 IOPS, 607.83 MiB/s [2024-11-15T10:57:57.550Z] 155824.00 IOPS, 608.69 MiB/s 00:14:10.689 Latency(us) 00:14:10.689 [2024-11-15T10:57:57.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.689 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:10.689 null0 : 5.00 156031.20 609.50 0.00 0.00 407.79 111.86 1816.06 00:14:10.689 [2024-11-15T10:57:57.550Z] =================================================================================================================== 00:14:10.689 [2024-11-15T10:57:57.550Z] Total : 156031.20 609.50 0.00 0.00 407.79 111.86 1816.06 00:14:11.625 10:57:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:11.625 10:57:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:11.625 10:57:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:11.625 10:57:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:11.625 10:57:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:11.625 10:57:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.625 { 00:14:11.625 "subsystems": [ 00:14:11.625 { 00:14:11.625 "subsystem": "bdev", 00:14:11.625 "config": [ 00:14:11.625 { 00:14:11.625 "params": { 00:14:11.625 "io_mechanism": "io_uring", 00:14:11.625 "filename": "/dev/nullb0", 00:14:11.625 "name": "null0" 00:14:11.625 }, 00:14:11.625 "method": "bdev_xnvme_create" 00:14:11.625 }, 00:14:11.625 { 00:14:11.625 "method": 
"bdev_wait_for_examine" 00:14:11.625 } 00:14:11.625 ] 00:14:11.625 } 00:14:11.625 ] 00:14:11.625 } 00:14:11.625 [2024-11-15 10:57:58.400633] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:11.625 [2024-11-15 10:57:58.400783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70537 ] 00:14:11.884 [2024-11-15 10:57:58.595431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.884 [2024-11-15 10:57:58.715077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.451 Running I/O for 5 seconds... 00:14:14.323 205184.00 IOPS, 801.50 MiB/s [2024-11-15T10:58:02.120Z] 203168.00 IOPS, 793.62 MiB/s [2024-11-15T10:58:03.056Z] 202944.00 IOPS, 792.75 MiB/s [2024-11-15T10:58:04.435Z] 201984.00 IOPS, 789.00 MiB/s 00:14:17.574 Latency(us) 00:14:17.574 [2024-11-15T10:58:04.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.574 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:17.574 null0 : 5.00 202574.01 791.30 0.00 0.00 313.59 180.13 1671.30 00:14:17.574 [2024-11-15T10:58:04.435Z] =================================================================================================================== 00:14:17.574 [2024-11-15T10:58:04.435Z] Total : 202574.01 791.30 0.00 0.00 313.59 180.13 1671.30 00:14:18.511 10:58:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:18.511 10:58:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:18.511 00:14:18.511 real 0m13.784s 00:14:18.511 user 0m10.371s 00:14:18.511 sys 0m3.198s 00:14:18.511 10:58:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.511 ************************************ 00:14:18.511 END TEST xnvme_bdevperf 00:14:18.511 ************************************ 00:14:18.511 10:58:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.511 ************************************ 00:14:18.511 END TEST nvme_xnvme 00:14:18.511 ************************************ 00:14:18.511 00:14:18.511 real 0m53.129s 00:14:18.511 user 0m44.614s 00:14:18.511 sys 0m7.816s 00:14:18.511 10:58:05 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.511 10:58:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.511 10:58:05 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:18.511 10:58:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:18.511 10:58:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.511 10:58:05 -- common/autotest_common.sh@10 -- # set +x 00:14:18.511 ************************************ 00:14:18.511 START TEST blockdev_xnvme 00:14:18.511 ************************************ 00:14:18.511 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:18.770 * Looking for test storage... 
00:14:18.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.770 10:58:05 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.770 --rc genhtml_branch_coverage=1 00:14:18.770 --rc genhtml_function_coverage=1 00:14:18.770 --rc genhtml_legend=1 00:14:18.770 --rc geninfo_all_blocks=1 00:14:18.770 --rc geninfo_unexecuted_blocks=1 00:14:18.770 00:14:18.770 ' 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.770 --rc genhtml_branch_coverage=1 00:14:18.770 --rc genhtml_function_coverage=1 00:14:18.770 --rc genhtml_legend=1 
00:14:18.770 --rc geninfo_all_blocks=1 00:14:18.770 --rc geninfo_unexecuted_blocks=1 00:14:18.770 00:14:18.770 ' 00:14:18.770 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:18.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.770 --rc genhtml_branch_coverage=1 00:14:18.770 --rc genhtml_function_coverage=1 00:14:18.770 --rc genhtml_legend=1 00:14:18.770 --rc geninfo_all_blocks=1 00:14:18.770 --rc geninfo_unexecuted_blocks=1 00:14:18.770 00:14:18.770 ' 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:18.771 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.771 --rc genhtml_branch_coverage=1 00:14:18.771 --rc genhtml_function_coverage=1 00:14:18.771 --rc genhtml_legend=1 00:14:18.771 --rc geninfo_all_blocks=1 00:14:18.771 --rc geninfo_unexecuted_blocks=1 00:14:18.771 00:14:18.771 ' 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70690 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70690 00:14:18.771 10:58:05 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@835 -- # 
'[' -z 70690 ']' 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.771 10:58:05 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:19.029 [2024-11-15 10:58:05.696304] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:19.030 [2024-11-15 10:58:05.696431] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70690 ] 00:14:19.030 [2024-11-15 10:58:05.877084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.288 [2024-11-15 10:58:05.991378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.227 10:58:06 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:20.227 10:58:06 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:14:20.227 10:58:06 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:20.227 10:58:06 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:20.227 10:58:06 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:20.227 10:58:06 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:20.227 10:58:06 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:20.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:21.053 Waiting for block devices as requested 00:14:21.053 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.053 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.312 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:21.312 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:26.634 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:26.634 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:26.634 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:26.634 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:26.635 
10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:26.635 nvme0n1 00:14:26.635 nvme1n1 00:14:26.635 nvme2n1 00:14:26.635 nvme2n2 00:14:26.635 nvme2n3 00:14:26.635 nvme3n1 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:26.635 10:58:13 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:26.635 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:26.636 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9d68ccd2-bee5-4491-8f7d-8d26a7c6cfa4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9d68ccd2-bee5-4491-8f7d-8d26a7c6cfa4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1f37d0f0-a865-49b9-9cb0-4e3d793c0ed4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1f37d0f0-a865-49b9-9cb0-4e3d793c0ed4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bc26c574-cc15-4189-af59-6401be3e4dc4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc26c574-cc15-4189-af59-6401be3e4dc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "37c19225-23f5-48c6-8a26-990bd618502b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "37c19225-23f5-48c6-8a26-990bd618502b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "77b18a3c-83f4-43d4-ad5b-01c37b2cc9fd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "77b18a3c-83f4-43d4-ad5b-01c37b2cc9fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "69492a03-e111-44cf-a45c-16a20cd37c41"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "69492a03-e111-44cf-a45c-16a20cd37c41",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:26.636 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:26.636 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:26.636 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:26.636 10:58:13 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70690 00:14:26.636 10:58:13 blockdev_xnvme -- 
common/autotest_common.sh@954 -- # '[' -z 70690 ']' 00:14:26.636 10:58:13 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 70690 00:14:26.636 10:58:13 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:14:26.636 10:58:13 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.636 10:58:13 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70690 00:14:26.895 killing process with pid 70690 00:14:26.895 10:58:13 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.895 10:58:13 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.895 10:58:13 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70690' 00:14:26.895 10:58:13 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 70690 00:14:26.895 10:58:13 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 70690 00:14:29.431 10:58:15 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:29.431 10:58:15 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:29.431 10:58:15 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:29.431 10:58:15 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.431 10:58:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.431 ************************************ 00:14:29.431 START TEST bdev_hello_world 00:14:29.431 ************************************ 00:14:29.431 10:58:15 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:29.431 [2024-11-15 10:58:15.988841] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:29.431 [2024-11-15 10:58:15.988980] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71066 ] 00:14:29.431 [2024-11-15 10:58:16.171109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.431 [2024-11-15 10:58:16.285514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.000 [2024-11-15 10:58:16.715926] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:30.000 [2024-11-15 10:58:16.715983] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:30.000 [2024-11-15 10:58:16.716020] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:30.000 [2024-11-15 10:58:16.718222] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:30.000 [2024-11-15 10:58:16.718714] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:30.000 [2024-11-15 10:58:16.718747] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:30.000 [2024-11-15 10:58:16.719036] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
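For reference, the unclaimed-bdev enumeration and the hello_world run traced above reduce to the two commands below. This is a minimal sketch, assuming an SPDK checkout as the working directory and a target already loaded with the same bdev.json; the jq filter and the -b argument are taken verbatim from the trace:
# List the names of all bdevs not claimed by another module
./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'
# Run the hello_bdev example against the first name returned (nvme0n1 in this run)
./build/examples/hello_bdev --json ./test/bdev/bdev.json -b nvme0n1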
00:14:30.000 00:14:30.000 [2024-11-15 10:58:16.719068] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:31.378 00:14:31.378 real 0m1.911s 00:14:31.378 user 0m1.532s 00:14:31.378 sys 0m0.262s 00:14:31.378 10:58:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.378 10:58:17 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:31.378 ************************************ 00:14:31.378 END TEST bdev_hello_world 00:14:31.378 ************************************ 00:14:31.378 10:58:17 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:31.378 10:58:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:31.378 10:58:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.378 10:58:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:31.378 ************************************ 00:14:31.378 START TEST bdev_bounds 00:14:31.378 ************************************ 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71108 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71108' 00:14:31.378 Process bdevio pid: 71108 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71108 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 71108 ']' 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.378 10:58:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:31.378 [2024-11-15 10:58:17.975404] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
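The bounds test just launched starts bdevio as a background RPC server (-w keeps it waiting for commands) and blocks on the UNIX socket before driving any suites. A sketch of that waitforlisten pattern, with rpc_get_methods assumed here as the readiness probe (any cheap RPC would do):
# Launch bdevio in server mode against the same bdev configuration
./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
bdevio_pid=$!
# Poll the RPC socket until the server is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# Kick off all registered CUnit suites over RPC
./test/bdev/bdevio/tests.py perform_tests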
00:14:31.378 [2024-11-15 10:58:17.975560] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71108 ] 00:14:31.378 [2024-11-15 10:58:18.161763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.638 [2024-11-15 10:58:18.283504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.638 [2024-11-15 10:58:18.283651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.638 [2024-11-15 10:58:18.283694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.206 10:58:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.206 10:58:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:14:32.206 10:58:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:32.206 I/O targets: 00:14:32.206 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:32.206 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:32.206 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:32.206 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:32.206 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:32.206 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:32.206 00:14:32.206 00:14:32.206 CUnit - A unit testing framework for C - Version 2.1-3 00:14:32.206 http://cunit.sourceforge.net/ 00:14:32.206 00:14:32.206 00:14:32.206 Suite: bdevio tests on: nvme3n1 00:14:32.206 Test: blockdev write read block ...passed 00:14:32.206 Test: blockdev write zeroes read block ...passed 00:14:32.206 Test: blockdev write zeroes read no split ...passed 00:14:32.206 Test: blockdev write zeroes read split ...passed 00:14:32.206 Test: blockdev write zeroes read split partial ...passed 00:14:32.206 Test: blockdev reset ...passed 00:14:32.206 Test: blockdev write read 8 blocks ...passed 00:14:32.206 Test: blockdev write read size > 128k ...passed 00:14:32.206 Test: blockdev write read invalid size ...passed 00:14:32.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.206 Test: blockdev write read max offset ...passed 00:14:32.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.206 Test: blockdev writev readv 8 blocks ...passed 00:14:32.206 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.206 Test: blockdev writev readv block ...passed 00:14:32.206 Test: blockdev writev readv size > 128k ...passed 00:14:32.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.206 Test: blockdev comparev and writev ...passed 00:14:32.206 Test: blockdev nvme passthru rw ...passed 00:14:32.206 Test: blockdev nvme passthru vendor specific ...passed 00:14:32.206 Test: blockdev nvme admin passthru ...passed 00:14:32.206 Test: blockdev copy ...passed 00:14:32.206 Suite: bdevio tests on: nvme2n3 00:14:32.206 Test: blockdev write read block ...passed 00:14:32.206 Test: blockdev write zeroes read block ...passed 00:14:32.206 Test: blockdev write zeroes read no split ...passed 00:14:32.206 Test: blockdev write zeroes read split ...passed 00:14:32.206 Test: blockdev write zeroes read split partial ...passed 00:14:32.206 Test: blockdev reset ...passed 
00:14:32.206 Test: blockdev write read 8 blocks ...passed 00:14:32.206 Test: blockdev write read size > 128k ...passed 00:14:32.206 Test: blockdev write read invalid size ...passed 00:14:32.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.206 Test: blockdev write read max offset ...passed 00:14:32.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.206 Test: blockdev writev readv 8 blocks ...passed 00:14:32.206 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.206 Test: blockdev writev readv block ...passed 00:14:32.206 Test: blockdev writev readv size > 128k ...passed 00:14:32.206 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.207 Test: blockdev comparev and writev ...passed 00:14:32.207 Test: blockdev nvme passthru rw ...passed 00:14:32.207 Test: blockdev nvme passthru vendor specific ...passed 00:14:32.207 Test: blockdev nvme admin passthru ...passed 00:14:32.207 Test: blockdev copy ...passed 00:14:32.207 Suite: bdevio tests on: nvme2n2 00:14:32.207 Test: blockdev write read block ...passed 00:14:32.207 Test: blockdev write zeroes read block ...passed 00:14:32.207 Test: blockdev write zeroes read no split ...passed 00:14:32.466 Test: blockdev write zeroes read split ...passed 00:14:32.466 Test: blockdev write zeroes read split partial ...passed 00:14:32.466 Test: blockdev reset ...passed 00:14:32.466 Test: blockdev write read 8 blocks ...passed 00:14:32.466 Test: blockdev write read size > 128k ...passed 00:14:32.466 Test: blockdev write read invalid size ...passed 00:14:32.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.466 Test: blockdev write read max offset ...passed 00:14:32.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.466 Test: blockdev writev readv 8 blocks ...passed 00:14:32.466 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.466 Test: blockdev writev readv block ...passed 00:14:32.466 Test: blockdev writev readv size > 128k ...passed 00:14:32.466 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.466 Test: blockdev comparev and writev ...passed 00:14:32.466 Test: blockdev nvme passthru rw ...passed 00:14:32.466 Test: blockdev nvme passthru vendor specific ...passed 00:14:32.466 Test: blockdev nvme admin passthru ...passed 00:14:32.466 Test: blockdev copy ...passed 00:14:32.466 Suite: bdevio tests on: nvme2n1 00:14:32.466 Test: blockdev write read block ...passed 00:14:32.466 Test: blockdev write zeroes read block ...passed 00:14:32.466 Test: blockdev write zeroes read no split ...passed 00:14:32.466 Test: blockdev write zeroes read split ...passed 00:14:32.466 Test: blockdev write zeroes read split partial ...passed 00:14:32.466 Test: blockdev reset ...passed 00:14:32.466 Test: blockdev write read 8 blocks ...passed 00:14:32.466 Test: blockdev write read size > 128k ...passed 00:14:32.466 Test: blockdev write read invalid size ...passed 00:14:32.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.466 Test: blockdev write read max offset ...passed 00:14:32.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.466 Test: blockdev writev readv 8 blocks 
...passed 00:14:32.466 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.466 Test: blockdev writev readv block ...passed 00:14:32.466 Test: blockdev writev readv size > 128k ...passed 00:14:32.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.467 Test: blockdev comparev and writev ...passed 00:14:32.467 Test: blockdev nvme passthru rw ...passed 00:14:32.467 Test: blockdev nvme passthru vendor specific ...passed 00:14:32.467 Test: blockdev nvme admin passthru ...passed 00:14:32.467 Test: blockdev copy ...passed 00:14:32.467 Suite: bdevio tests on: nvme1n1 00:14:32.467 Test: blockdev write read block ...passed 00:14:32.467 Test: blockdev write zeroes read block ...passed 00:14:32.467 Test: blockdev write zeroes read no split ...passed 00:14:32.467 Test: blockdev write zeroes read split ...passed 00:14:32.467 Test: blockdev write zeroes read split partial ...passed 00:14:32.467 Test: blockdev reset ...passed 00:14:32.467 Test: blockdev write read 8 blocks ...passed 00:14:32.467 Test: blockdev write read size > 128k ...passed 00:14:32.467 Test: blockdev write read invalid size ...passed 00:14:32.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.467 Test: blockdev write read max offset ...passed 00:14:32.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.467 Test: blockdev writev readv 8 blocks ...passed 00:14:32.467 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.467 Test: blockdev writev readv block ...passed 00:14:32.467 Test: blockdev writev readv size > 128k ...passed 00:14:32.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.467 Test: blockdev comparev and writev ...passed 00:14:32.467 Test: blockdev nvme passthru rw ...passed 00:14:32.467 Test: blockdev nvme passthru vendor specific ...passed 00:14:32.467 Test: blockdev nvme admin passthru ...passed 00:14:32.467 Test: blockdev copy ...passed 00:14:32.467 Suite: bdevio tests on: nvme0n1 00:14:32.467 Test: blockdev write read block ...passed 00:14:32.467 Test: blockdev write zeroes read block ...passed 00:14:32.467 Test: blockdev write zeroes read no split ...passed 00:14:32.726 Test: blockdev write zeroes read split ...passed 00:14:32.726 Test: blockdev write zeroes read split partial ...passed 00:14:32.726 Test: blockdev reset ...passed 00:14:32.726 Test: blockdev write read 8 blocks ...passed 00:14:32.726 Test: blockdev write read size > 128k ...passed 00:14:32.726 Test: blockdev write read invalid size ...passed 00:14:32.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.726 Test: blockdev write read max offset ...passed 00:14:32.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.726 Test: blockdev writev readv 8 blocks ...passed 00:14:32.726 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.726 Test: blockdev writev readv block ...passed 00:14:32.726 Test: blockdev writev readv size > 128k ...passed 00:14:32.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.726 Test: blockdev comparev and writev ...passed 00:14:32.726 Test: blockdev nvme passthru rw ...passed 00:14:32.726 Test: blockdev nvme passthru vendor specific ...passed 00:14:32.726 Test: blockdev nvme admin passthru ...passed 00:14:32.726 Test: blockdev copy ...passed 
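After the CUnit summary below, the harness tears the bdevio server down through its killprocess helper; a condensed sketch, reconstructed from the xtrace that follows (the comm check guards against killing a sudo wrapper instead of the reactor process):
# Condensed form of killprocess as traced below; simplified, not the exact helper
pid=71108
if kill -0 "$pid" 2>/dev/null; then              # is the process still running?
    name=$(ps --no-headers -o comm= "$pid")      # reactor_0 expected here
    [ "$name" = "sudo" ] || echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                  # reap it and propagate the exit code
fi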
00:14:32.726
00:14:32.726 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:32.726               suites      6      6    n/a      0        0
00:14:32.727                tests    138    138    138      0        0
00:14:32.727              asserts    780    780    780      0      n/a
00:14:32.727
00:14:32.727 Elapsed time = 1.281 seconds
00:14:32.727 0
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71108
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 71108 ']'
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 71108
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71108
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:32.727 10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:32.727 killing process with pid 71108
10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71108'
10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 71108
10:58:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 71108
00:14:34.106 10:58:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:14:34.106
00:14:34.106 real 0m2.682s
00:14:34.106 user 0m6.573s
00:14:34.106 sys 0m0.432s
00:14:34.106 10:58:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:34.106 ************************************
00:14:34.106 END TEST bdev_bounds
00:14:34.106 ************************************
00:14:34.106 10:58:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:14:34.106 10:58:20 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:14:34.106 10:58:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:34.106 10:58:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:34.106 10:58:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:14:34.106 ************************************
00:14:34.106 START TEST bdev_nbd
00:14:34.106 ************************************
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
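The nbd test below exports each xNVMe bdev as a kernel /dev/nbdX device and smoke-tests it with a direct-I/O dd read. The round trip for a single device looks roughly like this (a sketch with paths relative to an SPDK checkout; the bdev_svc options are trimmed from the traced invocation, and loading the nbd module is an assumption since the harness itself only checks /sys/module/nbd):
# Make sure the kernel nbd driver is present (assumed; harness only tests for it)
sudo modprobe nbd
# Start a minimal bdev service on a dedicated RPC socket
./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock --json ./test/bdev/bdev.json &
# Attach bdev nvme0n1 to /dev/nbd0, read one 4 KiB block back, then detach
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0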
00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71164 00:14:34.106 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71164 /var/tmp/spdk-nbd.sock 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 71164 ']' 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.107 10:58:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:34.107 [2024-11-15 10:58:20.741585] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:14:34.107 [2024-11-15 10:58:20.741705] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.107 [2024-11-15 10:58:20.925260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.366 [2024-11-15 10:58:21.044680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:34.934 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:35.193 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.194 
1+0 records in 00:14:35.194 1+0 records out 00:14:35.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529313 s, 7.7 MB/s 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:35.194 10:58:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.194 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.453 1+0 records in 00:14:35.453 1+0 records out 00:14:35.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00164732 s, 2.5 MB/s 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:35.453 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:35.712 10:58:22 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.712 1+0 records in 00:14:35.712 1+0 records out 00:14:35.712 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000584347 s, 7.0 MB/s 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.712 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.972 1+0 records in 00:14:35.972 1+0 records out 00:14:35.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761753 s, 5.4 MB/s 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:35.972 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.232 1+0 records in 00:14:36.232 1+0 records out 00:14:36.232 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000686236 s, 6.0 MB/s 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.232 10:58:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:36.232 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:36.232 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:36.232 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:36.232 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:14:36.232 10:58:23 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:36.232 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:36.492 1+0 records in 00:14:36.492 1+0 records out 00:14:36.492 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737002 s, 5.6 MB/s 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd0", 00:14:36.492 "bdev_name": "nvme0n1" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd1", 00:14:36.492 "bdev_name": "nvme1n1" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd2", 00:14:36.492 "bdev_name": "nvme2n1" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd3", 00:14:36.492 "bdev_name": "nvme2n2" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd4", 00:14:36.492 "bdev_name": "nvme2n3" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd5", 00:14:36.492 "bdev_name": "nvme3n1" 00:14:36.492 } 00:14:36.492 ]' 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:36.492 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd0", 00:14:36.492 "bdev_name": "nvme0n1" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd1", 00:14:36.492 "bdev_name": "nvme1n1" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd2", 00:14:36.492 "bdev_name": "nvme2n1" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd3", 00:14:36.492 "bdev_name": "nvme2n2" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": "/dev/nbd4", 00:14:36.492 "bdev_name": "nvme2n3" 00:14:36.492 }, 00:14:36.492 { 00:14:36.492 "nbd_device": 
"/dev/nbd5", 00:14:36.492 "bdev_name": "nvme3n1" 00:14:36.492 } 00:14:36.492 ]' 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:36.751 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.010 10:58:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.270 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.528 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:37.787 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:38.048 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:38.309 10:58:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:38.568 /dev/nbd0 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.568 1+0 records in 00:14:38.568 1+0 records out 00:14:38.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564518 s, 7.3 MB/s 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.568 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:38.569 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.569 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:38.569 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:38.828 /dev/nbd1 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:38.828 1+0 records in 00:14:38.828 1+0 records out 00:14:38.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00363719 s, 1.1 MB/s 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:38.828 10:58:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:38.828 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:38.828 /dev/nbd10 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.088 1+0 records in 00:14:39.088 1+0 records out 00:14:39.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733052 s, 5.6 MB/s 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:39.088 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:39.088 /dev/nbd11 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.347 10:58:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.347 1+0 records in 00:14:39.347 1+0 records out 00:14:39.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000732714 s, 5.6 MB/s 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:39.347 10:58:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:39.347 /dev/nbd12 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.607 1+0 records in 00:14:39.607 1+0 records out 00:14:39.607 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850707 s, 4.8 MB/s 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:39.607 /dev/nbd13 00:14:39.607 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.899 1+0 records in 00:14:39.899 1+0 records out 00:14:39.899 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000681886 s, 6.0 MB/s 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd0", 00:14:39.899 "bdev_name": "nvme0n1" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd1", 00:14:39.899 "bdev_name": "nvme1n1" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd10", 00:14:39.899 "bdev_name": "nvme2n1" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd11", 00:14:39.899 "bdev_name": "nvme2n2" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd12", 00:14:39.899 "bdev_name": "nvme2n3" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd13", 00:14:39.899 "bdev_name": "nvme3n1" 00:14:39.899 } 00:14:39.899 ]' 00:14:39.899 10:58:26 
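With all six bdevs exported, the suite counts the active NBD nodes: nbd_get_disks returns the JSON device map echoed in the records that follow, jq pulls out each nbd_device field, and grep -c tallies them against the expected 6. The same check, condensed into one pipeline (RPC script path and socket taken from the trace, assuming the SPDK repo root as working directory):

# Count active NBD exports; must equal the number of bdevs started.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
    | jq -r '.[] | .nbd_device' \
    | grep -c /dev/nbd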
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd0", 00:14:39.899 "bdev_name": "nvme0n1" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd1", 00:14:39.899 "bdev_name": "nvme1n1" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd10", 00:14:39.899 "bdev_name": "nvme2n1" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd11", 00:14:39.899 "bdev_name": "nvme2n2" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd12", 00:14:39.899 "bdev_name": "nvme2n3" 00:14:39.899 }, 00:14:39.899 { 00:14:39.899 "nbd_device": "/dev/nbd13", 00:14:39.899 "bdev_name": "nvme3n1" 00:14:39.899 } 00:14:39.899 ]' 00:14:39.899 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:40.158 /dev/nbd1 00:14:40.158 /dev/nbd10 00:14:40.158 /dev/nbd11 00:14:40.158 /dev/nbd12 00:14:40.158 /dev/nbd13' 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:40.158 /dev/nbd1 00:14:40.158 /dev/nbd10 00:14:40.158 /dev/nbd11 00:14:40.158 /dev/nbd12 00:14:40.158 /dev/nbd13' 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:40.158 256+0 records in 00:14:40.158 256+0 records out 00:14:40.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122785 s, 85.4 MB/s 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:40.158 256+0 records in 00:14:40.158 256+0 records out 00:14:40.158 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122964 s, 8.5 MB/s 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:40.158 10:58:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:40.416 256+0 records in 00:14:40.416 256+0 records out 00:14:40.416 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.150318 s, 7.0 MB/s 00:14:40.416 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:40.417 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:40.417 256+0 records in 00:14:40.417 256+0 records out 00:14:40.417 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122528 s, 8.6 MB/s 00:14:40.417 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:40.417 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:40.676 256+0 records in 00:14:40.676 256+0 records out 00:14:40.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124207 s, 8.4 MB/s 00:14:40.676 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:40.676 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:40.676 256+0 records in 00:14:40.676 256+0 records out 00:14:40.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126747 s, 8.3 MB/s 00:14:40.676 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:40.676 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:40.935 256+0 records in 00:14:40.935 256+0 records out 00:14:40.935 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121185 s, 8.7 MB/s 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:40.935 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.194 10:58:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.453 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:41.712 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:41.970 10:58:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.228 10:58:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.228 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:42.486 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:42.487 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.487 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:42.487 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:42.745 malloc_lvol_verify 00:14:42.745 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:43.003 ff88e1a4-5d8e-4dd6-aad1-54cd3609078b 00:14:43.003 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:43.261 b24dbfeb-b9ce-4ce8-a9e0-dc6650a6fefe 00:14:43.261 10:58:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:43.261 /dev/nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
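The records above walk SPDK's NBD lifecycle for each bdev: nbd_start_disk exports it over the RPC socket, waitfornbd polls /proc/partitions until the kernel registers the node, a single 4 KiB O_DIRECT read proves it answers I/O, a dd/cmp round-trip verifies data integrity, and nbd_stop_disk plus waitfornbd_exit tear it down; the same machinery then exports a freshly created lvol and formats it with mkfs.ext4 (whose mke2fs output follows right after this sketch). A self-contained sketch of that cycle, assuming a running SPDK target on /var/tmp/spdk-nbd.sock and an existing bdev name; the 20-attempt limit matches the trace, while the sleep between attempts is an assumption (the traced runs succeed on the first pass):

#!/usr/bin/env bash
# Sketch of the NBD attach/verify/detach cycle traced above. Assumes a
# running SPDK target with RPC socket /var/tmp/spdk-nbd.sock, run from
# the SPDK repo root; sleep interval is illustrative.
set -euo pipefail

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
BDEV=${1:-nvme0n1}
NBD=/dev/nbd0

# Poll /proc/partitions until the node appears or disappears (the
# waitfornbd / waitfornbd_exit pattern; 20 attempts, as in the trace).
wait_for_nbd() {
    local name=$1 present=$2 i found
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$name" /proc/partitions; then found=yes; else found=no; fi
        if [[ $found == "$present" ]]; then
            return 0
        fi
        sleep 0.1   # assumed delay; the traced runs never need to retry
    done
    return 1
}

$RPC nbd_start_disk "$BDEV" "$NBD"
wait_for_nbd "${NBD#/dev/}" yes

# One 4 KiB direct read proves the device answers I/O (trace line @889).
dd if="$NBD" of=/tmp/nbdtest bs=4096 count=1 iflag=direct

# Data round-trip: write 1 MiB of random data, then compare it back.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of="$NBD" bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest "$NBD"
rm -f /tmp/nbdtest /tmp/nbdrandtest

# Detach and wait for the kernel to drop the node.
$RPC nbd_stop_disk "$NBD"
wait_for_nbd "${NBD#/dev/}" no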
00:14:43.520 mke2fs 1.47.0 (5-Feb-2023) 00:14:43.520 Discarding device blocks: 0/4096 done 00:14:43.520 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:43.520 00:14:43.520 Allocating group tables: 0/1 done 00:14:43.520 Writing inode tables: 0/1 done 00:14:43.520 Creating journal (1024 blocks): done 00:14:43.520 Writing superblocks and filesystem accounting information: 0/1 done 00:14:43.520 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71164 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 71164 ']' 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 71164 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:14:43.520 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.778 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71164 00:14:43.778 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:43.778 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:43.778 killing process with pid 71164 00:14:43.778 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71164' 00:14:43.778 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 71164 00:14:43.778 10:58:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 71164 00:14:45.155 10:58:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:45.155 00:14:45.155 real 0m10.958s 00:14:45.155 user 0m14.064s 00:14:45.155 sys 0m4.692s 00:14:45.155 10:58:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:45.155 10:58:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:45.155 ************************************ 
00:14:45.155 END TEST bdev_nbd 00:14:45.155 ************************************ 00:14:45.155 10:58:31 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:45.155 10:58:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:45.155 10:58:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:45.155 10:58:31 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:45.155 10:58:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:45.155 10:58:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.155 10:58:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:45.155 ************************************ 00:14:45.155 START TEST bdev_fio 00:14:45.155 ************************************ 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:45.155 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:45.155 ************************************ 00:14:45.155 START TEST bdev_fio_rw_verify 00:14:45.155 ************************************ 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:14:45.155 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:45.156 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:45.156 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:14:45.156 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:45.156 10:58:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:45.414 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:45.414 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:45.414 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:45.414 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:45.414 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:45.414 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:45.414 fio-3.35 00:14:45.414 Starting 6 threads 00:14:57.671 00:14:57.671 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71578: Fri Nov 15 10:58:42 2024 00:14:57.671 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(1301MiB/10001msec) 00:14:57.671 slat (usec): min=2, max=539, avg= 6.23, stdev= 3.29 00:14:57.671 clat (usec): min=92, max=4419, avg=592.73, 
stdev=156.13 00:14:57.671 lat (usec): min=98, max=4425, avg=598.96, stdev=156.79 00:14:57.671 clat percentiles (usec): 00:14:57.671 | 50.000th=[ 619], 99.000th=[ 955], 99.900th=[ 1680], 99.990th=[ 2999], 00:14:57.671 | 99.999th=[ 4424] 00:14:57.671 write: IOPS=33.7k, BW=131MiB/s (138MB/s)(1315MiB/10001msec); 0 zone resets 00:14:57.671 slat (usec): min=11, max=1149, avg=18.76, stdev=17.20 00:14:57.671 clat (usec): min=80, max=2575, avg=642.25, stdev=149.84 00:14:57.671 lat (usec): min=104, max=2592, avg=661.01, stdev=151.03 00:14:57.671 clat percentiles (usec): 00:14:57.671 | 50.000th=[ 660], 99.000th=[ 1074], 99.900th=[ 1516], 99.990th=[ 2311], 00:14:57.671 | 99.999th=[ 2540] 00:14:57.671 bw ( KiB/s): min=107576, max=150865, per=99.97%, avg=134604.47, stdev=2241.66, samples=114 00:14:57.671 iops : min=26894, max=37716, avg=33650.95, stdev=560.40, samples=114 00:14:57.671 lat (usec) : 100=0.01%, 250=2.79%, 500=13.30%, 750=73.60%, 1000=9.15% 00:14:57.671 lat (msec) : 2=1.11%, 4=0.04%, 10=0.01% 00:14:57.671 cpu : usr=62.72%, sys=27.35%, ctx=7897, majf=0, minf=27662 00:14:57.671 IO depths : 1=12.2%, 2=24.7%, 4=50.3%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:57.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.671 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:57.671 issued rwts: total=332986,336650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:57.671 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:57.671 00:14:57.671 Run status group 0 (all jobs): 00:14:57.671 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1301MiB (1364MB), run=10001-10001msec 00:14:57.671 WRITE: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=1315MiB (1379MB), run=10001-10001msec 00:14:57.671 ----------------------------------------------------- 00:14:57.671 Suppressions used: 00:14:57.671 count bytes template 00:14:57.671 6 48 /usr/src/fio/parse.c 00:14:57.671 3419 328224 /usr/src/fio/iolog.c 00:14:57.671 1 8 libtcmalloc_minimal.so 00:14:57.671 1 904 libcrypto.so 00:14:57.671 ----------------------------------------------------- 00:14:57.671 00:14:57.671 00:14:57.671 real 0m12.513s 00:14:57.671 user 0m39.604s 00:14:57.671 sys 0m16.840s 00:14:57.671 ************************************ 00:14:57.671 END TEST bdev_fio_rw_verify 00:14:57.671 ************************************ 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:14:57.671 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "9d68ccd2-bee5-4491-8f7d-8d26a7c6cfa4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "9d68ccd2-bee5-4491-8f7d-8d26a7c6cfa4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1f37d0f0-a865-49b9-9cb0-4e3d793c0ed4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "1f37d0f0-a865-49b9-9cb0-4e3d793c0ed4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "bc26c574-cc15-4189-af59-6401be3e4dc4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "bc26c574-cc15-4189-af59-6401be3e4dc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "37c19225-23f5-48c6-8a26-990bd618502b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "37c19225-23f5-48c6-8a26-990bd618502b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "77b18a3c-83f4-43d4-ad5b-01c37b2cc9fd"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "77b18a3c-83f4-43d4-ad5b-01c37b2cc9fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "69492a03-e111-44cf-a45c-16a20cd37c41"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "69492a03-e111-44cf-a45c-16a20cd37c41",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:57.672 /home/vagrant/spdk_repo/spdk 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:14:57.672 00:14:57.672 real 0m12.741s 00:14:57.672 user 0m39.722s 00:14:57.672 sys 0m16.955s 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.672 10:58:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:57.672 ************************************ 00:14:57.672 END TEST bdev_fio 00:14:57.672 ************************************ 00:14:57.672 10:58:44 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:57.672 10:58:44 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:57.672 10:58:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:14:57.672 10:58:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.672 10:58:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:57.672 ************************************ 00:14:57.672 START TEST bdev_verify 00:14:57.672 ************************************ 00:14:57.672 10:58:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:57.931 [2024-11-15 10:58:44.579393] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:14:57.931 [2024-11-15 10:58:44.579559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71757 ] 00:14:57.931 [2024-11-15 10:58:44.763771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:58.189 [2024-11-15 10:58:44.883756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.189 [2024-11-15 10:58:44.883789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.756 Running I/O for 5 seconds... 
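bdev_verify swaps fio for SPDK's bdevperf example, pointing it at the same bdev JSON config. The invocation restated from the launch record above, with the standard bdevperf flags annotated; -C is copied verbatim from the log and left unannotated here:

# bdevperf launch for bdev_verify, restated from the trace above.
#   --json   bdev configuration (the six xnvme bdevs listed earlier)
#   -q 128   queue depth
#   -o 4096  I/O size in bytes
#   -w verify  write-then-read-back workload with data checking
#   -t 5     run time in seconds
#   -m 0x3   core mask: reactors on cores 0 and 1 (matching the two
#            "Reactor started" records above)
build/examples/bdevperf --json test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3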
00:15:01.070 23360.00 IOPS, 91.25 MiB/s [2024-11-15T10:58:48.868Z] 23888.00 IOPS, 93.31 MiB/s [2024-11-15T10:58:49.804Z] 23978.67 IOPS, 93.67 MiB/s [2024-11-15T10:58:50.742Z] 24016.00 IOPS, 93.81 MiB/s [2024-11-15T10:58:50.742Z] 23801.60 IOPS, 92.97 MiB/s 00:15:03.881 Latency(us) 00:15:03.881 [2024-11-15T10:58:50.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.881 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x0 length 0xa0000 00:15:03.881 nvme0n1 : 5.03 1833.93 7.16 0.00 0.00 69679.48 13159.84 69483.95 00:15:03.881 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0xa0000 length 0xa0000 00:15:03.881 nvme0n1 : 5.05 1722.99 6.73 0.00 0.00 74173.14 13475.68 69062.84 00:15:03.881 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x0 length 0xbd0bd 00:15:03.881 nvme1n1 : 5.06 2837.96 11.09 0.00 0.00 44863.54 5685.05 55587.16 00:15:03.881 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:03.881 nvme1n1 : 5.06 2801.34 10.94 0.00 0.00 45315.17 5500.81 59798.31 00:15:03.881 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x0 length 0x80000 00:15:03.881 nvme2n1 : 5.07 1867.64 7.30 0.00 0.00 67984.81 6711.52 61482.77 00:15:03.881 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x80000 length 0x80000 00:15:03.881 nvme2n1 : 5.07 1742.26 6.81 0.00 0.00 72883.84 7474.79 72431.76 00:15:03.881 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x0 length 0x80000 00:15:03.881 nvme2n2 : 5.07 1844.71 7.21 0.00 0.00 68659.94 7895.90 65272.80 00:15:03.881 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x80000 length 0x80000 00:15:03.881 nvme2n2 : 5.06 1719.38 6.72 0.00 0.00 73676.45 11580.66 69483.95 00:15:03.881 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x0 length 0x80000 00:15:03.881 nvme2n3 : 5.07 1841.42 7.19 0.00 0.00 68668.14 7632.71 65693.92 00:15:03.881 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x80000 length 0x80000 00:15:03.881 nvme2n3 : 5.08 1739.95 6.80 0.00 0.00 72681.20 8053.82 63167.23 00:15:03.881 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x0 length 0x20000 00:15:03.881 nvme3n1 : 5.08 1840.86 7.19 0.00 0.00 68585.04 7895.90 61903.88 00:15:03.881 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:03.881 Verification LBA range: start 0x20000 length 0x20000 00:15:03.881 nvme3n1 : 5.07 1741.40 6.80 0.00 0.00 72508.82 7316.87 69062.84 00:15:03.881 [2024-11-15T10:58:50.743Z] =================================================================================================================== 00:15:03.882 [2024-11-15T10:58:50.743Z] Total : 23533.84 91.93 0.00 0.00 64699.15 5500.81 72431.76 00:15:04.819 00:15:04.819 real 0m7.143s 00:15:04.819 user 0m10.878s 00:15:04.819 sys 0m2.065s 00:15:04.819 10:58:51 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:04.819 10:58:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:04.819 ************************************ 00:15:04.819 END TEST bdev_verify 00:15:04.819 ************************************ 00:15:05.078 10:58:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:05.078 10:58:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:15:05.078 10:58:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.078 10:58:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:05.078 ************************************ 00:15:05.078 START TEST bdev_verify_big_io 00:15:05.078 ************************************ 00:15:05.078 10:58:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:05.078 [2024-11-15 10:58:51.797621] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:05.078 [2024-11-15 10:58:51.797754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71859 ] 00:15:05.338 [2024-11-15 10:58:51.968692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:05.338 [2024-11-15 10:58:52.083630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.338 [2024-11-15 10:58:52.083666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.907 Running I/O for 5 seconds... 
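Every suite in this log is launched through run_test, which prints the START/END banner pairs and the real/user/sys totals that punctuate the output. A minimal re-creation of that observable behavior; the real helper in test/common/autotest_common.sh also toggles xtrace and does exit-code bookkeeping not shown here:

# Minimal sketch of the run_test banner/timing behavior visible in
# this log; internals beyond the banners and timing are assumed.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"          # emits the real/user/sys lines seen above
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}

# As used above for the big-I/O pass (note -o 65536 versus 4096):
#   run_test_sketch bdev_verify_big_io build/examples/bdevperf \
#       --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3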
00:15:11.201 1888.00 IOPS, 118.00 MiB/s [2024-11-15T10:58:58.630Z] 3374.00 IOPS, 210.88 MiB/s [2024-11-15T10:58:58.630Z] 4030.67 IOPS, 251.92 MiB/s 00:15:11.769 Latency(us) 00:15:11.769 [2024-11-15T10:58:58.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.769 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x0 length 0xa000 00:15:11.769 nvme0n1 : 5.54 173.25 10.83 0.00 0.00 711613.49 150759.12 815278.37 00:15:11.769 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0xa000 length 0xa000 00:15:11.769 nvme0n1 : 5.53 148.74 9.30 0.00 0.00 831456.45 94750.84 1482324.31 00:15:11.769 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x0 length 0xbd0b 00:15:11.769 nvme1n1 : 5.67 203.22 12.70 0.00 0.00 591303.75 56008.28 997199.99 00:15:11.769 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:11.769 nvme1n1 : 5.53 164.97 10.31 0.00 0.00 731835.55 10738.43 1017413.50 00:15:11.769 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x0 length 0x8000 00:15:11.769 nvme2n1 : 5.55 207.67 12.98 0.00 0.00 568009.01 73695.10 700735.13 00:15:11.769 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x8000 length 0x8000 00:15:11.769 nvme2n1 : 5.59 206.08 12.88 0.00 0.00 571418.32 60219.42 754637.83 00:15:11.769 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x0 length 0x8000 00:15:11.769 nvme2n2 : 5.78 152.20 9.51 0.00 0.00 765916.29 30530.83 1192597.28 00:15:11.769 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x8000 length 0x8000 00:15:11.769 nvme2n2 : 5.74 197.84 12.36 0.00 0.00 583267.16 38532.01 724317.56 00:15:11.769 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x0 length 0x8000 00:15:11.769 nvme2n3 : 5.78 153.41 9.59 0.00 0.00 736802.71 79169.59 1516013.49 00:15:11.769 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x8000 length 0x8000 00:15:11.769 nvme2n3 : 5.74 153.21 9.58 0.00 0.00 734883.17 19160.73 1886594.57 00:15:11.769 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x0 length 0x2000 00:15:11.769 nvme3n1 : 5.79 196.28 12.27 0.00 0.00 566704.73 3632.12 1738362.14 00:15:11.769 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:11.769 Verification LBA range: start 0x2000 length 0x2000 00:15:11.769 nvme3n1 : 5.76 177.91 11.12 0.00 0.00 624570.40 5106.02 700735.13 00:15:11.769 [2024-11-15T10:58:58.630Z] =================================================================================================================== 00:15:11.769 [2024-11-15T10:58:58.630Z] Total : 2134.76 133.42 0.00 0.00 657234.64 3632.12 1886594.57 00:15:13.148 00:15:13.148 real 0m8.109s 00:15:13.148 user 0m14.589s 00:15:13.148 sys 0m0.675s 00:15:13.148 10:58:59 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.148 10:58:59 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.148 ************************************ 00:15:13.148 END TEST bdev_verify_big_io 00:15:13.148 ************************************ 00:15:13.148 10:58:59 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:13.148 10:58:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:13.148 10:58:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.148 10:58:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.148 ************************************ 00:15:13.148 START TEST bdev_write_zeroes 00:15:13.148 ************************************ 00:15:13.148 10:58:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:13.148 [2024-11-15 10:58:59.986045] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:13.148 [2024-11-15 10:58:59.986170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71964 ] 00:15:13.408 [2024-11-15 10:59:00.165799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.667 [2024-11-15 10:59:00.282584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.925 Running I/O for 1 seconds... 
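Each bdevperf invocation in this phase, including the write_zeroes pass now running, takes its bdev definitions from the file passed to --json instead of configuring them over RPC. The file uses the standard SPDK subsystem-config layout, the same shape that save_config emits further down in this log; a minimal hypothetical example carrying a single malloc bdev (the bdev.json used here is generated by the test setup and presumably defines the xnvme-backed nvme0n1 through nvme3n1 devices seen in the result tables):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "num_blocks": 8192, "block_size": 4096 }
            }
          ]
        }
      ]
    }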
00:15:15.384 58944.00 IOPS, 230.25 MiB/s 00:15:15.384 Latency(us) 00:15:15.384 [2024-11-15T10:59:02.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.384 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:15.384 nvme0n1 : 1.02 9416.43 36.78 0.00 0.00 13579.87 6948.40 29478.04 00:15:15.384 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:15.384 nvme1n1 : 1.02 12135.93 47.41 0.00 0.00 10530.03 4500.67 23687.71 00:15:15.384 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:15.384 nvme2n1 : 1.02 9407.26 36.75 0.00 0.00 13517.51 5369.21 29267.48 00:15:15.384 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:15.384 nvme2n2 : 1.02 9399.00 36.71 0.00 0.00 13513.15 4974.42 29267.48 00:15:15.384 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:15.384 nvme2n3 : 1.02 9390.82 36.68 0.00 0.00 13518.19 5053.38 29478.04 00:15:15.384 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:15.384 nvme3n1 : 1.02 9382.61 36.65 0.00 0.00 13520.66 5106.02 29478.04 00:15:15.384 [2024-11-15T10:59:02.245Z] =================================================================================================================== 00:15:15.384 [2024-11-15T10:59:02.245Z] Total : 59132.06 230.98 0.00 0.00 12915.91 4500.67 29478.04 00:15:16.319 00:15:16.319 real 0m2.980s 00:15:16.319 user 0m2.194s 00:15:16.319 sys 0m0.608s 00:15:16.319 10:59:02 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.319 10:59:02 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:16.319 ************************************ 00:15:16.319 END TEST bdev_write_zeroes 00:15:16.319 ************************************ 00:15:16.319 10:59:02 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.319 10:59:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:16.319 10:59:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.319 10:59:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.319 ************************************ 00:15:16.319 START TEST bdev_json_nonenclosed 00:15:16.319 ************************************ 00:15:16.319 10:59:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.319 [2024-11-15 10:59:03.040927] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:15:16.319 [2024-11-15 10:59:03.041038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72025 ] 00:15:16.578 [2024-11-15 10:59:03.223272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.578 [2024-11-15 10:59:03.338284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.578 [2024-11-15 10:59:03.338384] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:16.578 [2024-11-15 10:59:03.338407] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:16.578 [2024-11-15 10:59:03.338419] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:16.837 00:15:16.837 real 0m0.649s 00:15:16.837 user 0m0.396s 00:15:16.837 sys 0m0.148s 00:15:16.837 ************************************ 00:15:16.837 END TEST bdev_json_nonenclosed 00:15:16.837 ************************************ 00:15:16.837 10:59:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.837 10:59:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:16.837 10:59:03 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:16.837 10:59:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:15:16.837 10:59:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.837 10:59:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:16.837 ************************************ 00:15:16.837 START TEST bdev_json_nonarray 00:15:16.837 ************************************ 00:15:16.837 10:59:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:17.095 [2024-11-15 10:59:03.762308] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:17.095 [2024-11-15 10:59:03.762427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72055 ] 00:15:17.095 [2024-11-15 10:59:03.944481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.352 [2024-11-15 10:59:04.054758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.352 [2024-11-15 10:59:04.055049] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
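Both JSON negative tests here follow one pattern: bdevperf is pointed at a deliberately malformed configuration file and must fail cleanly through spdk_app_stop with a non-zero code, as the surrounding WARNING lines show, rather than crash. The malformed files themselves are not reproduced in the log; hypothetical shapes that would trigger exactly these two errors:

    nonenclosed.json, content not wrapped in a top-level {} object:
        "subsystems": []

    nonarray.json, where "subsystems" is an object instead of an array:
        { "subsystems": {} }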
00:15:17.352 [2024-11-15 10:59:04.055080] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:17.352 [2024-11-15 10:59:04.055093] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:17.611 00:15:17.611 real 0m0.645s 00:15:17.611 user 0m0.390s 00:15:17.611 sys 0m0.150s 00:15:17.611 10:59:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:17.611 ************************************ 00:15:17.611 END TEST bdev_json_nonarray 00:15:17.611 ************************************ 00:15:17.611 10:59:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:17.611 10:59:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:18.544 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:36.633 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.633 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.633 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.633 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:36.633 00:15:36.633 real 1m15.799s 00:15:36.633 user 1m41.985s 00:15:36.633 sys 0m50.701s 00:15:36.633 10:59:21 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.633 10:59:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.633 ************************************ 00:15:36.633 END TEST blockdev_xnvme 00:15:36.633 ************************************ 00:15:36.633 10:59:21 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:36.633 10:59:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:36.633 10:59:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.633 10:59:21 -- common/autotest_common.sh@10 -- # set +x 00:15:36.633 ************************************ 00:15:36.633 START TEST ublk 00:15:36.633 ************************************ 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:36.633 * Looking for test storage... 
00:15:36.633 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.633 10:59:21 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.633 10:59:21 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.633 10:59:21 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.633 10:59:21 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.633 10:59:21 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.633 10:59:21 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:36.633 10:59:21 ublk -- scripts/common.sh@345 -- # : 1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.633 10:59:21 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.633 10:59:21 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@353 -- # local d=1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.633 10:59:21 ublk -- scripts/common.sh@355 -- # echo 1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.633 10:59:21 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@353 -- # local d=2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.633 10:59:21 ublk -- scripts/common.sh@355 -- # echo 2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.633 10:59:21 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.633 10:59:21 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.633 10:59:21 ublk -- scripts/common.sh@368 -- # return 0 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.633 --rc genhtml_branch_coverage=1 00:15:36.633 --rc genhtml_function_coverage=1 00:15:36.633 --rc genhtml_legend=1 00:15:36.633 --rc geninfo_all_blocks=1 00:15:36.633 --rc geninfo_unexecuted_blocks=1 00:15:36.633 00:15:36.633 ' 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.633 --rc genhtml_branch_coverage=1 00:15:36.633 --rc genhtml_function_coverage=1 00:15:36.633 --rc genhtml_legend=1 00:15:36.633 --rc geninfo_all_blocks=1 00:15:36.633 --rc geninfo_unexecuted_blocks=1 00:15:36.633 00:15:36.633 ' 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.633 --rc genhtml_branch_coverage=1 00:15:36.633 --rc 
genhtml_function_coverage=1 00:15:36.633 --rc genhtml_legend=1 00:15:36.633 --rc geninfo_all_blocks=1 00:15:36.633 --rc geninfo_unexecuted_blocks=1 00:15:36.633 00:15:36.633 ' 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:36.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.633 --rc genhtml_branch_coverage=1 00:15:36.633 --rc genhtml_function_coverage=1 00:15:36.633 --rc genhtml_legend=1 00:15:36.633 --rc geninfo_all_blocks=1 00:15:36.633 --rc geninfo_unexecuted_blocks=1 00:15:36.633 00:15:36.633 ' 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:36.633 10:59:21 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:36.633 10:59:21 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:36.633 10:59:21 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:36.633 10:59:21 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:36.633 10:59:21 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:36.633 10:59:21 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:36.633 10:59:21 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:36.633 10:59:21 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:36.633 10:59:21 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.633 10:59:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:36.633 ************************************ 00:15:36.633 START TEST test_save_ublk_config 00:15:36.633 ************************************ 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:36.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
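test_save_ublk_config begins by launching a bare target with ublk debug tracing enabled and blocking until its RPC socket answers; that wait is what the message above reports, and the spdk_tgt command line follows below. A minimal sketch of the same startup handshake, assuming the stock repo layout (the harness uses its waitforlisten helper, but polling any cheap RPC works too):

    # start a target with -L ublk so the UBLK_CMD_* debug traces below get emitted
    build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    # poll until the UNIX-domain RPC socket /var/tmp/spdk.sock is serving
    until scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.5; done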
00:15:36.633 10:59:21 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72367 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72367 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 72367 ']' 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.633 10:59:21 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.633 [2024-11-15 10:59:21.585004] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:36.633 [2024-11-15 10:59:21.585125] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72367 ] 00:15:36.633 [2024-11-15 10:59:21.768431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.633 [2024-11-15 10:59:21.881305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.633 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.634 [2024-11-15 10:59:22.781557] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:36.634 [2024-11-15 10:59:22.782676] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:36.634 malloc0 00:15:36.634 [2024-11-15 10:59:22.869701] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:36.634 [2024-11-15 10:59:22.869797] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:36.634 [2024-11-15 10:59:22.869811] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:36.634 [2024-11-15 10:59:22.869820] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:36.634 [2024-11-15 10:59:22.878652] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:36.634 [2024-11-15 10:59:22.878678] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:36.634 [2024-11-15 10:59:22.885558] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:36.634 [2024-11-15 10:59:22.885658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_START_DEV 00:15:36.634 [2024-11-15 10:59:22.902562] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:36.634 0 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.634 10:59:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.634 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.634 10:59:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:36.634 "subsystems": [ 00:15:36.634 { 00:15:36.634 "subsystem": "fsdev", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "fsdev_set_opts", 00:15:36.634 "params": { 00:15:36.634 "fsdev_io_pool_size": 65535, 00:15:36.634 "fsdev_io_cache_size": 256 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "keyring", 00:15:36.634 "config": [] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "iobuf", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "iobuf_set_options", 00:15:36.634 "params": { 00:15:36.634 "small_pool_count": 8192, 00:15:36.634 "large_pool_count": 1024, 00:15:36.634 "small_bufsize": 8192, 00:15:36.634 "large_bufsize": 135168, 00:15:36.634 "enable_numa": false 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "sock", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "sock_set_default_impl", 00:15:36.634 "params": { 00:15:36.634 "impl_name": "posix" 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "sock_impl_set_options", 00:15:36.634 "params": { 00:15:36.634 "impl_name": "ssl", 00:15:36.634 "recv_buf_size": 4096, 00:15:36.634 "send_buf_size": 4096, 00:15:36.634 "enable_recv_pipe": true, 00:15:36.634 "enable_quickack": false, 00:15:36.634 "enable_placement_id": 0, 00:15:36.634 "enable_zerocopy_send_server": true, 00:15:36.634 "enable_zerocopy_send_client": false, 00:15:36.634 "zerocopy_threshold": 0, 00:15:36.634 "tls_version": 0, 00:15:36.634 "enable_ktls": false 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "sock_impl_set_options", 00:15:36.634 "params": { 00:15:36.634 "impl_name": "posix", 00:15:36.634 "recv_buf_size": 2097152, 00:15:36.634 "send_buf_size": 2097152, 00:15:36.634 "enable_recv_pipe": true, 00:15:36.634 "enable_quickack": false, 00:15:36.634 "enable_placement_id": 0, 00:15:36.634 "enable_zerocopy_send_server": true, 00:15:36.634 "enable_zerocopy_send_client": false, 00:15:36.634 "zerocopy_threshold": 0, 00:15:36.634 "tls_version": 0, 00:15:36.634 "enable_ktls": false 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "vmd", 00:15:36.634 "config": [] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "accel", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "accel_set_options", 00:15:36.634 "params": { 00:15:36.634 "small_cache_size": 128, 00:15:36.634 "large_cache_size": 16, 00:15:36.634 "task_count": 2048, 00:15:36.634 "sequence_count": 2048, 00:15:36.634 "buf_count": 2048 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "bdev", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "bdev_set_options", 00:15:36.634 
"params": { 00:15:36.634 "bdev_io_pool_size": 65535, 00:15:36.634 "bdev_io_cache_size": 256, 00:15:36.634 "bdev_auto_examine": true, 00:15:36.634 "iobuf_small_cache_size": 128, 00:15:36.634 "iobuf_large_cache_size": 16 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "bdev_raid_set_options", 00:15:36.634 "params": { 00:15:36.634 "process_window_size_kb": 1024, 00:15:36.634 "process_max_bandwidth_mb_sec": 0 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "bdev_iscsi_set_options", 00:15:36.634 "params": { 00:15:36.634 "timeout_sec": 30 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "bdev_nvme_set_options", 00:15:36.634 "params": { 00:15:36.634 "action_on_timeout": "none", 00:15:36.634 "timeout_us": 0, 00:15:36.634 "timeout_admin_us": 0, 00:15:36.634 "keep_alive_timeout_ms": 10000, 00:15:36.634 "arbitration_burst": 0, 00:15:36.634 "low_priority_weight": 0, 00:15:36.634 "medium_priority_weight": 0, 00:15:36.634 "high_priority_weight": 0, 00:15:36.634 "nvme_adminq_poll_period_us": 10000, 00:15:36.634 "nvme_ioq_poll_period_us": 0, 00:15:36.634 "io_queue_requests": 0, 00:15:36.634 "delay_cmd_submit": true, 00:15:36.634 "transport_retry_count": 4, 00:15:36.634 "bdev_retry_count": 3, 00:15:36.634 "transport_ack_timeout": 0, 00:15:36.634 "ctrlr_loss_timeout_sec": 0, 00:15:36.634 "reconnect_delay_sec": 0, 00:15:36.634 "fast_io_fail_timeout_sec": 0, 00:15:36.634 "disable_auto_failback": false, 00:15:36.634 "generate_uuids": false, 00:15:36.634 "transport_tos": 0, 00:15:36.634 "nvme_error_stat": false, 00:15:36.634 "rdma_srq_size": 0, 00:15:36.634 "io_path_stat": false, 00:15:36.634 "allow_accel_sequence": false, 00:15:36.634 "rdma_max_cq_size": 0, 00:15:36.634 "rdma_cm_event_timeout_ms": 0, 00:15:36.634 "dhchap_digests": [ 00:15:36.634 "sha256", 00:15:36.634 "sha384", 00:15:36.634 "sha512" 00:15:36.634 ], 00:15:36.634 "dhchap_dhgroups": [ 00:15:36.634 "null", 00:15:36.634 "ffdhe2048", 00:15:36.634 "ffdhe3072", 00:15:36.634 "ffdhe4096", 00:15:36.634 "ffdhe6144", 00:15:36.634 "ffdhe8192" 00:15:36.634 ] 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "bdev_nvme_set_hotplug", 00:15:36.634 "params": { 00:15:36.634 "period_us": 100000, 00:15:36.634 "enable": false 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "bdev_malloc_create", 00:15:36.634 "params": { 00:15:36.634 "name": "malloc0", 00:15:36.634 "num_blocks": 8192, 00:15:36.634 "block_size": 4096, 00:15:36.634 "physical_block_size": 4096, 00:15:36.634 "uuid": "8b408bec-120d-44f4-8866-8d11deba39b5", 00:15:36.634 "optimal_io_boundary": 0, 00:15:36.634 "md_size": 0, 00:15:36.634 "dif_type": 0, 00:15:36.634 "dif_is_head_of_md": false, 00:15:36.634 "dif_pi_format": 0 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "bdev_wait_for_examine" 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "scsi", 00:15:36.634 "config": null 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "scheduler", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "framework_set_scheduler", 00:15:36.634 "params": { 00:15:36.634 "name": "static" 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "vhost_scsi", 00:15:36.634 "config": [] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "vhost_blk", 00:15:36.634 "config": [] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "ublk", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": 
"ublk_create_target", 00:15:36.634 "params": { 00:15:36.634 "cpumask": "1" 00:15:36.634 } 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "method": "ublk_start_disk", 00:15:36.634 "params": { 00:15:36.634 "bdev_name": "malloc0", 00:15:36.634 "ublk_id": 0, 00:15:36.634 "num_queues": 1, 00:15:36.634 "queue_depth": 128 00:15:36.634 } 00:15:36.634 } 00:15:36.634 ] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "nbd", 00:15:36.634 "config": [] 00:15:36.634 }, 00:15:36.634 { 00:15:36.634 "subsystem": "nvmf", 00:15:36.634 "config": [ 00:15:36.634 { 00:15:36.634 "method": "nvmf_set_config", 00:15:36.635 "params": { 00:15:36.635 "discovery_filter": "match_any", 00:15:36.635 "admin_cmd_passthru": { 00:15:36.635 "identify_ctrlr": false 00:15:36.635 }, 00:15:36.635 "dhchap_digests": [ 00:15:36.635 "sha256", 00:15:36.635 "sha384", 00:15:36.635 "sha512" 00:15:36.635 ], 00:15:36.635 "dhchap_dhgroups": [ 00:15:36.635 "null", 00:15:36.635 "ffdhe2048", 00:15:36.635 "ffdhe3072", 00:15:36.635 "ffdhe4096", 00:15:36.635 "ffdhe6144", 00:15:36.635 "ffdhe8192" 00:15:36.635 ] 00:15:36.635 } 00:15:36.635 }, 00:15:36.635 { 00:15:36.635 "method": "nvmf_set_max_subsystems", 00:15:36.635 "params": { 00:15:36.635 "max_subsystems": 1024 00:15:36.635 } 00:15:36.635 }, 00:15:36.635 { 00:15:36.635 "method": "nvmf_set_crdt", 00:15:36.635 "params": { 00:15:36.635 "crdt1": 0, 00:15:36.635 "crdt2": 0, 00:15:36.635 "crdt3": 0 00:15:36.635 } 00:15:36.635 } 00:15:36.635 ] 00:15:36.635 }, 00:15:36.635 { 00:15:36.635 "subsystem": "iscsi", 00:15:36.635 "config": [ 00:15:36.635 { 00:15:36.635 "method": "iscsi_set_options", 00:15:36.635 "params": { 00:15:36.635 "node_base": "iqn.2016-06.io.spdk", 00:15:36.635 "max_sessions": 128, 00:15:36.635 "max_connections_per_session": 2, 00:15:36.635 "max_queue_depth": 64, 00:15:36.635 "default_time2wait": 2, 00:15:36.635 "default_time2retain": 20, 00:15:36.635 "first_burst_length": 8192, 00:15:36.635 "immediate_data": true, 00:15:36.635 "allow_duplicated_isid": false, 00:15:36.635 "error_recovery_level": 0, 00:15:36.635 "nop_timeout": 60, 00:15:36.635 "nop_in_interval": 30, 00:15:36.635 "disable_chap": false, 00:15:36.635 "require_chap": false, 00:15:36.635 "mutual_chap": false, 00:15:36.635 "chap_group": 0, 00:15:36.635 "max_large_datain_per_connection": 64, 00:15:36.635 "max_r2t_per_connection": 4, 00:15:36.635 "pdu_pool_size": 36864, 00:15:36.635 "immediate_data_pool_size": 16384, 00:15:36.635 "data_out_pool_size": 2048 00:15:36.635 } 00:15:36.635 } 00:15:36.635 ] 00:15:36.635 } 00:15:36.635 ] 00:15:36.635 }' 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72367 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 72367 ']' 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 72367 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72367 00:15:36.635 killing process with pid 72367 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 72367' 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 72367 00:15:36.635 10:59:23 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 72367 00:15:38.035 [2024-11-15 10:59:24.670074] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:38.035 [2024-11-15 10:59:24.709562] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:38.035 [2024-11-15 10:59:24.709713] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:38.035 [2024-11-15 10:59:24.717565] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:38.035 [2024-11-15 10:59:24.717615] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:38.035 [2024-11-15 10:59:24.717632] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:38.035 [2024-11-15 10:59:24.717657] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:38.035 [2024-11-15 10:59:24.717797] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:39.948 10:59:26 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72433 00:15:39.948 10:59:26 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72433 00:15:39.948 10:59:26 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 72433 ']' 00:15:39.948 10:59:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:39.948 10:59:26 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.948 10:59:26 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:39.948 "subsystems": [ 00:15:39.948 { 00:15:39.948 "subsystem": "fsdev", 00:15:39.948 "config": [ 00:15:39.948 { 00:15:39.948 "method": "fsdev_set_opts", 00:15:39.948 "params": { 00:15:39.948 "fsdev_io_pool_size": 65535, 00:15:39.948 "fsdev_io_cache_size": 256 00:15:39.948 } 00:15:39.948 } 00:15:39.948 ] 00:15:39.948 }, 00:15:39.948 { 00:15:39.948 "subsystem": "keyring", 00:15:39.948 "config": [] 00:15:39.948 }, 00:15:39.948 { 00:15:39.948 "subsystem": "iobuf", 00:15:39.948 "config": [ 00:15:39.948 { 00:15:39.948 "method": "iobuf_set_options", 00:15:39.948 "params": { 00:15:39.948 "small_pool_count": 8192, 00:15:39.948 "large_pool_count": 1024, 00:15:39.948 "small_bufsize": 8192, 00:15:39.948 "large_bufsize": 135168, 00:15:39.948 "enable_numa": false 00:15:39.948 } 00:15:39.948 } 00:15:39.948 ] 00:15:39.948 }, 00:15:39.948 { 00:15:39.948 "subsystem": "sock", 00:15:39.948 "config": [ 00:15:39.948 { 00:15:39.948 "method": "sock_set_default_impl", 00:15:39.948 "params": { 00:15:39.948 "impl_name": "posix" 00:15:39.948 } 00:15:39.948 }, 00:15:39.948 { 00:15:39.948 "method": "sock_impl_set_options", 00:15:39.948 "params": { 00:15:39.948 "impl_name": "ssl", 00:15:39.948 "recv_buf_size": 4096, 00:15:39.948 "send_buf_size": 4096, 00:15:39.948 "enable_recv_pipe": true, 00:15:39.948 "enable_quickack": false, 00:15:39.948 "enable_placement_id": 0, 00:15:39.948 "enable_zerocopy_send_server": true, 00:15:39.948 "enable_zerocopy_send_client": false, 00:15:39.948 "zerocopy_threshold": 0, 00:15:39.948 "tls_version": 0, 00:15:39.948 "enable_ktls": false 00:15:39.948 } 00:15:39.948 }, 00:15:39.948 { 00:15:39.948 "method": "sock_impl_set_options", 00:15:39.948 "params": { 00:15:39.948 "impl_name": "posix", 00:15:39.948 "recv_buf_size": 2097152, 00:15:39.948 "send_buf_size": 2097152, 00:15:39.948 "enable_recv_pipe": 
true, 00:15:39.948 "enable_quickack": false, 00:15:39.948 "enable_placement_id": 0, 00:15:39.948 "enable_zerocopy_send_server": true, 00:15:39.948 "enable_zerocopy_send_client": false, 00:15:39.948 "zerocopy_threshold": 0, 00:15:39.948 "tls_version": 0, 00:15:39.948 "enable_ktls": false 00:15:39.948 } 00:15:39.948 } 00:15:39.949 ] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "vmd", 00:15:39.949 "config": [] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "accel", 00:15:39.949 "config": [ 00:15:39.949 { 00:15:39.949 "method": "accel_set_options", 00:15:39.949 "params": { 00:15:39.949 "small_cache_size": 128, 00:15:39.949 "large_cache_size": 16, 00:15:39.949 "task_count": 2048, 00:15:39.949 "sequence_count": 2048, 00:15:39.949 "buf_count": 2048 00:15:39.949 } 00:15:39.949 } 00:15:39.949 ] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "bdev", 00:15:39.949 "config": [ 00:15:39.949 { 00:15:39.949 "method": "bdev_set_options", 00:15:39.949 "params": { 00:15:39.949 "bdev_io_pool_size": 65535, 00:15:39.949 "bdev_io_cache_size": 256, 00:15:39.949 "bdev_auto_examine": true, 00:15:39.949 "iobuf_small_cache_size": 128, 00:15:39.949 "iobuf_large_cache_size": 16 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "bdev_raid_set_options", 00:15:39.949 "params": { 00:15:39.949 "process_window_size_kb": 1024, 00:15:39.949 "process_max_bandwidth_mb_sec": 0 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "bdev_iscsi_set_options", 00:15:39.949 "params": { 00:15:39.949 "timeout_sec": 30 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "bdev_nvme_set_options", 00:15:39.949 "params": { 00:15:39.949 "action_on_timeout": "none", 00:15:39.949 "timeout_us": 0, 00:15:39.949 "timeout_admin_us": 0, 00:15:39.949 "keep_alive_timeout_ms": 10000, 00:15:39.949 "arbitration_burst": 0, 00:15:39.949 "low_priority_weight": 0, 00:15:39.949 "medium_priority_weight": 0, 00:15:39.949 "high_priority_weight": 0, 00:15:39.949 "nvme_adminq_poll_period_us": 10000, 00:15:39.949 "nvme_ioq_poll_period_us": 0, 00:15:39.949 "io_queue_requests": 0, 00:15:39.949 "delay_cmd_submit": true, 00:15:39.949 "transport_retry_count": 4, 00:15:39.949 "bdev_retry_count": 3, 00:15:39.949 "transport_ack_timeout": 0, 00:15:39.949 "ctrlr_loss_timeout_sec": 0, 00:15:39.949 "reconnect_delay_sec": 0, 00:15:39.949 "fast_io_fail_timeout_sec": 0, 00:15:39.949 "disable_auto_failback": false, 00:15:39.949 "generate_uuids": false, 00:15:39.949 "transport_tos": 0, 00:15:39.949 "nvme_error_stat": false, 00:15:39.949 "rdma_srq_size": 0, 00:15:39.949 "io_path_stat": false, 00:15:39.949 "allow_accel_sequence": false, 00:15:39.949 "rdma_max_cq_size": 0, 00:15:39.949 "rdma_cm_event_timeout_ms": 0, 00:15:39.949 "dhchap_digests": [ 00:15:39.949 "sha256", 00:15:39.949 "sha384", 00:15:39.949 "sha512" 00:15:39.949 ], 00:15:39.949 "dhchap_dhgroups": [ 00:15:39.949 "null", 00:15:39.949 "ffdhe2048", 00:15:39.949 "ffdhe3072", 00:15:39.949 "ffdhe4096", 00:15:39.949 "ffdhe6144", 00:15:39.949 "ffdhe8192" 00:15:39.949 ] 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "bdev_nvme_set_hotplug", 00:15:39.949 "params": { 00:15:39.949 "period_us": 100000, 00:15:39.949 "enable": false 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "bdev_malloc_create", 00:15:39.949 "params": { 00:15:39.949 "name": "malloc0", 00:15:39.949 "num_blocks": 8192, 00:15:39.949 "block_size": 4096, 00:15:39.949 "physical_block_size": 4096, 00:15:39.949 "uuid": 
"8b408bec-120d-44f4-8866-8d11deba39b5", 00:15:39.949 "optimal_io_boundary": 0, 00:15:39.949 "md_size": 0, 00:15:39.949 "dif_type": 0, 00:15:39.949 "dif_is_head_of_md": false, 00:15:39.949 "dif_pi_format": 0 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "bdev_wait_for_examine" 00:15:39.949 } 00:15:39.949 ] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "scsi", 00:15:39.949 "config": null 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "scheduler", 00:15:39.949 "config": [ 00:15:39.949 { 00:15:39.949 "method": "framework_set_scheduler", 00:15:39.949 "params": { 00:15:39.949 "name": "static" 00:15:39.949 } 00:15:39.949 } 00:15:39.949 ] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "vhost_scsi", 00:15:39.949 "config": [] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "vhost_blk", 00:15:39.949 "config": [] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "ublk", 00:15:39.949 "config": [ 00:15:39.949 { 00:15:39.949 "method": "ublk_create_target", 00:15:39.949 "params": { 00:15:39.949 "cpumask": "1" 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "ublk_start_disk", 00:15:39.949 "params": { 00:15:39.949 "bdev_name": "malloc0", 00:15:39.949 "ublk_id": 0, 00:15:39.949 "num_queues": 1, 00:15:39.949 "queue_depth": 128 00:15:39.949 } 00:15:39.949 } 00:15:39.949 ] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "nbd", 00:15:39.949 "config": [] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "nvmf", 00:15:39.949 "config": [ 00:15:39.949 { 00:15:39.949 "method": "nvmf_set_config", 00:15:39.949 "params": { 00:15:39.949 "discovery_filter": "match_any", 00:15:39.949 "admin_cmd_passthru": { 00:15:39.949 "identify_ctrlr": false 00:15:39.949 }, 00:15:39.949 "dhchap_digests": [ 00:15:39.949 "sha256", 00:15:39.949 "sha384", 00:15:39.949 "sha512" 00:15:39.949 ], 00:15:39.949 "dhchap_dhgroups": [ 00:15:39.949 "null", 00:15:39.949 "ffdhe2048", 00:15:39.949 "ffdhe3072", 00:15:39.949 "ffdhe4096", 00:15:39.949 "ffdhe6144", 00:15:39.949 "ffdhe8192" 00:15:39.949 ] 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "nvmf_set_max_subsystems", 00:15:39.949 "params": { 00:15:39.949 "max_subsystems": 1024 00:15:39.949 } 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "method": "nvmf_set_crdt", 00:15:39.949 "params": { 00:15:39.949 "crdt1": 0, 00:15:39.949 "crdt2": 0, 00:15:39.949 "crdt3": 0 00:15:39.949 } 00:15:39.949 } 00:15:39.949 ] 00:15:39.949 }, 00:15:39.949 { 00:15:39.949 "subsystem": "iscsi", 00:15:39.949 "config": [ 00:15:39.949 { 00:15:39.949 "method": "iscsi_set_options", 00:15:39.949 "params": { 00:15:39.949 "node_base": "iqn.2016-06.io.spdk", 00:15:39.949 "max_sessions": 128, 00:15:39.949 "max_connections_per_session": 2, 00:15:39.949 "max_queue_depth": 64, 00:15:39.949 "default_time2wait": 2, 00:15:39.949 "default_time2retain": 20, 00:15:39.949 "first_burst_length": 8192, 00:15:39.949 "immediate_data": true, 00:15:39.949 "allow_duplicated_isid": false, 00:15:39.949 "error_recovery_level": 0, 00:15:39.949 "nop_timeout": 60, 00:15:39.949 "nop_in_interval": 30, 00:15:39.949 "disable_chap": false, 00:15:39.949 "require_chap": false, 00:15:39.949 "mutual_chap": false, 00:15:39.949 "chap_group": 0, 00:15:39.949 "max_large_datain_per_connection": 64, 00:15:39.949 "max_r2t_per_connection": 4, 00:15:39.949 "pdu_pool_size": 36864, 00:15:39.949 "immediate_data_pool_size": 16384, 00:15:39.949 "data_out_pool_size": 2048 00:15:39.949 } 00:15:39.949 } 00:15:39.949 ] 00:15:39.949 } 00:15:39.949 ] 
00:15:39.949 }' 00:15:39.949 10:59:26 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.949 10:59:26 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.949 10:59:26 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.949 10:59:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:39.949 [2024-11-15 10:59:26.658090] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:39.949 [2024-11-15 10:59:26.658217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72433 ] 00:15:40.208 [2024-11-15 10:59:26.839080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:40.208 [2024-11-15 10:59:26.952439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.145 [2024-11-15 10:59:27.975553] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:41.145 [2024-11-15 10:59:27.976819] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:41.145 [2024-11-15 10:59:27.983727] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:41.145 [2024-11-15 10:59:27.983828] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:41.145 [2024-11-15 10:59:27.983842] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:41.145 [2024-11-15 10:59:27.983850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:41.145 [2024-11-15 10:59:27.992632] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:41.145 [2024-11-15 10:59:27.992658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:41.145 [2024-11-15 10:59:27.999568] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:41.145 [2024-11-15 10:59:27.999668] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:41.404 [2024-11-15 10:59:28.016549] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- ublk/ublk.sh@125 
-- # killprocess 72433 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 72433 ']' 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 72433 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72433 00:15:41.405 killing process with pid 72433 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72433' 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 72433 00:15:41.405 10:59:28 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 72433 00:15:43.311 [2024-11-15 10:59:29.823449] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:43.311 [2024-11-15 10:59:29.854621] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:43.311 [2024-11-15 10:59:29.854744] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:43.311 [2024-11-15 10:59:29.862561] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:43.311 [2024-11-15 10:59:29.862611] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:43.311 [2024-11-15 10:59:29.862620] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:43.311 [2024-11-15 10:59:29.862645] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:43.311 [2024-11-15 10:59:29.862787] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:45.224 10:59:31 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:45.224 ************************************ 00:15:45.224 END TEST test_save_ublk_config 00:15:45.224 ************************************ 00:15:45.224 00:15:45.224 real 0m10.222s 00:15:45.224 user 0m7.751s 00:15:45.224 sys 0m3.196s 00:15:45.224 10:59:31 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.224 10:59:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:45.224 10:59:31 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72524 00:15:45.224 10:59:31 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:45.224 10:59:31 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:45.224 10:59:31 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72524 00:15:45.224 10:59:31 ublk -- common/autotest_common.sh@835 -- # '[' -z 72524 ']' 00:15:45.224 10:59:31 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.224 10:59:31 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.224 10:59:31 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
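The test that just finished is a configuration round trip: the first target is configured over RPC (ublk target, malloc0, /dev/ublkb0), its state is exported with save_config, and a second target is then booted from that JSON alone, passing only if /dev/ublkb0 reappears without any further RPCs. A minimal sketch of the same pattern:

    # dump the live target's state as subsystem-config JSON
    scripts/rpc.py save_config > saved.json
    # boot a fresh target straight from that state; the harness avoids a temp
    # file with process substitution, which the shell exposes as /dev/fd/63,
    # exactly the path visible in the spdk_tgt command line above
    build/bin/spdk_tgt -L ublk -c <(cat saved.json)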
00:15:45.224 10:59:31 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.224 10:59:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:45.224 [2024-11-15 10:59:31.860499] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:15:45.224 [2024-11-15 10:59:31.860625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72524 ] 00:15:45.224 [2024-11-15 10:59:32.043126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:45.482 [2024-11-15 10:59:32.157315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.482 [2024-11-15 10:59:32.157391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.422 10:59:33 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.422 10:59:33 ublk -- common/autotest_common.sh@868 -- # return 0 00:15:46.422 10:59:33 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:46.422 10:59:33 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:46.422 10:59:33 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:46.422 10:59:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:46.422 ************************************ 00:15:46.422 START TEST test_create_ublk 00:15:46.422 ************************************ 00:15:46.422 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:15:46.422 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:46.422 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.422 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:46.422 [2024-11-15 10:59:33.046548] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:46.422 [2024-11-15 10:59:33.049178] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:46.422 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.422 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:46.422 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:46.422 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.422 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:46.681 [2024-11-15 10:59:33.341734] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:46.681 [2024-11-15 10:59:33.342220] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:46.681 [2024-11-15 10:59:33.342235] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:46.681 [2024-11-15 10:59:33.342244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd 
UBLK_CMD_ADD_DEV 00:15:46.681 [2024-11-15 10:59:33.349580] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:46.681 [2024-11-15 10:59:33.349604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:46.681 [2024-11-15 10:59:33.357594] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:46.681 [2024-11-15 10:59:33.381606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:46.681 [2024-11-15 10:59:33.403572] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:46.681 10:59:33 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:46.681 { 00:15:46.681 "ublk_device": "/dev/ublkb0", 00:15:46.681 "id": 0, 00:15:46.681 "queue_depth": 512, 00:15:46.681 "num_queues": 4, 00:15:46.681 "bdev_name": "Malloc0" 00:15:46.681 } 00:15:46.681 ]' 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:46.681 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:46.941 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:46.941 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:46.941 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:46.941 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:46.941 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:46.941 10:59:33 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:46.941 10:59:33 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:46.941 fio: verification read phase will never start because write phase uses all of runtime 00:15:46.941 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:46.941 fio-3.35 00:15:46.941 Starting 1 process 00:15:59.160 00:15:59.160 fio_test: (groupid=0, jobs=1): err= 0: pid=72576: Fri Nov 15 10:59:43 2024 00:15:59.160 write: IOPS=10.2k, BW=39.9MiB/s (41.8MB/s)(399MiB/10001msec); 0 zone resets 00:15:59.160 clat (usec): min=48, max=7966, avg=97.01, stdev=153.35 00:15:59.160 lat (usec): min=48, max=8005, avg=97.52, stdev=153.41 00:15:59.160 clat percentiles (usec): 00:15:59.160 | 1.00th=[ 59], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 66], 00:15:59.160 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:15:59.160 | 70.00th=[ 98], 80.00th=[ 101], 90.00th=[ 106], 95.00th=[ 113], 00:15:59.160 | 99.00th=[ 128], 99.50th=[ 151], 99.90th=[ 3359], 99.95th=[ 3621], 00:15:59.160 | 99.99th=[ 3884] 00:15:59.160 bw ( KiB/s): min=19257, max=58752, per=100.00%, avg=41062.37, stdev=9197.36, samples=19 00:15:59.160 iops : min= 4814, max=14688, avg=10265.58, stdev=2299.37, samples=19 00:15:59.160 lat (usec) : 50=0.01%, 100=78.01%, 250=21.63%, 500=0.03%, 750=0.02% 00:15:59.160 lat (usec) : 1000=0.02% 00:15:59.160 lat (msec) : 2=0.08%, 4=0.20%, 10=0.01% 00:15:59.160 cpu : usr=2.26%, sys=8.13%, ctx=102123, majf=0, minf=797 00:15:59.160 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:59.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.160 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:59.160 issued rwts: total=0,102122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:59.160 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:59.160 00:15:59.160 Run status group 0 (all jobs): 00:15:59.160 WRITE: bw=39.9MiB/s (41.8MB/s), 39.9MiB/s-39.9MiB/s (41.8MB/s-41.8MB/s), io=399MiB (418MB), run=10001-10001msec 00:15:59.160 00:15:59.160 Disk stats (read/write): 00:15:59.160 ublkb0: ios=0/101148, merge=0/0, ticks=0/8837, in_queue=8837, util=99.11% 00:15:59.160 10:59:43 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.160 [2024-11-15 10:59:43.902807] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:59.160 [2024-11-15 10:59:43.944658] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:59.160 [2024-11-15 10:59:43.945649] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:59.160 [2024-11-15 10:59:43.952586] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:59.160 [2024-11-15 10:59:43.952955] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
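The xtrace above walks the whole single-disk path: create the ublk target, back it with a 128 MiB malloc bdev, expose it as /dev/ublkb0, sanity-check the RPC output with jq, then write a 0xcc pattern with fio. Condensed into a standalone shell session, this is a sketch only, assuming spdk_tgt is already running and the ublk_drv kernel module is loaded; the rpc.py path is the one this log itself uses later:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC ublk_create_target                          # one-time setup; ctrl cmds then go via /dev/ublk-control
  malloc=$($RPC bdev_malloc_create 128 4096)       # 128 MiB bdev, 4096 B blocks; auto-named Malloc0
  $RPC ublk_start_disk "$malloc" 0 -q 4 -d 512     # ADD_DEV, SET_PARAMS, START_DEV -> /dev/ublkb0
  $RPC ublk_get_disks -n 0 | jq -r '.[0].ublk_device'   # expect /dev/ublkb0

  # Same fio job as above: a 10 s time-based pattern write. The verify phase
  # never runs because the write phase consumes the whole runtime, as fio warns.
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

The queue_depth and num_queues passed to ublk_start_disk are what the jq assertions above read back out of the ublk_get_disks JSON.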
00:15:59.160 [2024-11-15 10:59:43.952981] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.160 10:59:43 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.160 [2024-11-15 10:59:43.971690] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:59.160 request: 00:15:59.160 { 00:15:59.160 "ublk_id": 0, 00:15:59.160 "method": "ublk_stop_disk", 00:15:59.160 "req_id": 1 00:15:59.160 } 00:15:59.160 Got JSON-RPC error response 00:15:59.160 response: 00:15:59.160 { 00:15:59.160 "code": -19, 00:15:59.160 "message": "No such device" 00:15:59.160 } 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:59.160 10:59:43 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.160 10:59:43 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.160 [2024-11-15 10:59:43.991692] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:59.160 [2024-11-15 10:59:43.999549] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:59.161 [2024-11-15 10:59:43.999613] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:44 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:44 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 10:59:44 
ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:59.161 ************************************ 00:15:59.161 END TEST test_create_ublk 00:15:59.161 ************************************ 00:15:59.161 10:59:44 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:59.161 00:15:59.161 real 0m11.876s 00:15:59.161 user 0m0.597s 00:15:59.161 sys 0m0.966s 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.161 10:59:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 10:59:44 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:59.161 10:59:44 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:59.161 10:59:44 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.161 10:59:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 ************************************ 00:15:59.161 START TEST test_create_multi_ublk 00:15:59.161 ************************************ 00:15:59.161 10:59:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:15:59.161 10:59:44 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:59.161 10:59:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:44 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 [2024-11-15 10:59:44.999547] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:59.161 [2024-11-15 10:59:45.002718] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:59.161 10:59:45 
ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 [2024-11-15 10:59:45.321756] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:59.161 [2024-11-15 10:59:45.322315] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:59.161 [2024-11-15 10:59:45.322336] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:59.161 [2024-11-15 10:59:45.322354] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:59.161 [2024-11-15 10:59:45.326237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:59.161 [2024-11-15 10:59:45.326277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:59.161 [2024-11-15 10:59:45.336564] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:59.161 [2024-11-15 10:59:45.337205] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:59.161 [2024-11-15 10:59:45.363563] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.161 [2024-11-15 10:59:45.711782] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:59.161 [2024-11-15 10:59:45.712323] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:59.161 [2024-11-15 10:59:45.712348] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:59.161 [2024-11-15 10:59:45.712360] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:59.161 [2024-11-15 10:59:45.720106] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:59.161 [2024-11-15 10:59:45.720141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:59.161 [2024-11-15 10:59:45.727574] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:59.161 [2024-11-15 10:59:45.728223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:59.161 [2024-11-15 10:59:45.736615] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:59.161 10:59:45 
ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.161 10:59:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.420 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.421 [2024-11-15 10:59:46.089717] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:59.421 [2024-11-15 10:59:46.090265] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:59.421 [2024-11-15 10:59:46.090287] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:59.421 [2024-11-15 10:59:46.090302] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:59.421 [2024-11-15 10:59:46.097617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:59.421 [2024-11-15 10:59:46.097658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:59.421 [2024-11-15 10:59:46.105577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:59.421 [2024-11-15 10:59:46.106274] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:59.421 [2024-11-15 10:59:46.114625] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.421 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.680 [2024-11-15 10:59:46.457756] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:59.680 [2024-11-15 10:59:46.458265] ublk.c:1965:ublk_start_disk: *INFO*: Enabling 
kernel access to bdev Malloc3 via ublk 3 00:15:59.680 [2024-11-15 10:59:46.458288] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:59.680 [2024-11-15 10:59:46.458298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:59.680 [2024-11-15 10:59:46.465621] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:59.680 [2024-11-15 10:59:46.465648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:59.680 [2024-11-15 10:59:46.473583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:59.680 [2024-11-15 10:59:46.474209] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:59.680 [2024-11-15 10:59:46.482614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:59.680 { 00:15:59.680 "ublk_device": "/dev/ublkb0", 00:15:59.680 "id": 0, 00:15:59.680 "queue_depth": 512, 00:15:59.680 "num_queues": 4, 00:15:59.680 "bdev_name": "Malloc0" 00:15:59.680 }, 00:15:59.680 { 00:15:59.680 "ublk_device": "/dev/ublkb1", 00:15:59.680 "id": 1, 00:15:59.680 "queue_depth": 512, 00:15:59.680 "num_queues": 4, 00:15:59.680 "bdev_name": "Malloc1" 00:15:59.680 }, 00:15:59.680 { 00:15:59.680 "ublk_device": "/dev/ublkb2", 00:15:59.680 "id": 2, 00:15:59.680 "queue_depth": 512, 00:15:59.680 "num_queues": 4, 00:15:59.680 "bdev_name": "Malloc2" 00:15:59.680 }, 00:15:59.680 { 00:15:59.680 "ublk_device": "/dev/ublkb3", 00:15:59.680 "id": 3, 00:15:59.680 "queue_depth": 512, 00:15:59.680 "num_queues": 4, 00:15:59.680 "bdev_name": "Malloc3" 00:15:59.680 } 00:15:59.680 ]' 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:59.680 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:59.939 10:59:46 
ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:59.939 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:00.199 10:59:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:00.199 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:00.199 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:00.458 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # 
rpc_cmd ublk_stop_disk 0 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:00.719 [2024-11-15 10:59:47.377784] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:00.719 [2024-11-15 10:59:47.421628] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:00.719 [2024-11-15 10:59:47.422930] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:00.719 [2024-11-15 10:59:47.429585] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:00.719 [2024-11-15 10:59:47.429970] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:00.719 [2024-11-15 10:59:47.429996] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:00.719 [2024-11-15 10:59:47.445697] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:00.719 [2024-11-15 10:59:47.481196] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:00.719 [2024-11-15 10:59:47.482597] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:00.719 [2024-11-15 10:59:47.490633] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:00.719 [2024-11-15 10:59:47.490983] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:00.719 [2024-11-15 10:59:47.491003] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:00.719 [2024-11-15 10:59:47.505710] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:00.719 [2024-11-15 10:59:47.549614] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:00.719 [2024-11-15 10:59:47.550656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:00.719 [2024-11-15 10:59:47.557588] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:00.719 [2024-11-15 10:59:47.557930] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:00.719 [2024-11-15 10:59:47.557953] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 
00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.719 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:00.719 [2024-11-15 10:59:47.573688] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:00.978 [2024-11-15 10:59:47.603166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:00.978 [2024-11-15 10:59:47.604264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:00.978 [2024-11-15 10:59:47.613600] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:00.978 [2024-11-15 10:59:47.613952] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:00.978 [2024-11-15 10:59:47.613970] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:00.978 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.979 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:00.979 [2024-11-15 10:59:47.817725] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:00.979 [2024-11-15 10:59:47.825576] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:00.979 [2024-11-15 10:59:47.825635] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:01.238 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:01.238 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:01.238 10:59:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:01.238 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.238 10:59:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:01.804 10:59:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.804 10:59:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:01.804 10:59:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:01.804 10:59:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.804 10:59:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.370 10:59:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.370 10:59:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:02.370 10:59:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:02.370 10:59:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.370 10:59:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:02.629 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.629 10:59:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:02.629 10:59:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:02.629 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.629 10:59:49 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:03.197 ************************************ 00:16:03.197 END TEST test_create_multi_ublk 00:16:03.197 ************************************ 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:03.197 00:16:03.197 real 0m4.908s 00:16:03.197 user 0m1.029s 00:16:03.197 sys 0m0.223s 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.197 10:59:49 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:03.197 10:59:49 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:03.197 10:59:49 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:03.197 10:59:49 ublk -- ublk/ublk.sh@130 -- # killprocess 72524 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@954 -- # '[' -z 72524 ']' 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@958 -- # kill -0 72524 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@959 -- # uname 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72524 00:16:03.197 killing process with pid 72524 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72524' 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@973 -- # kill 72524 00:16:03.197 10:59:49 ublk -- common/autotest_common.sh@978 -- # wait 72524 00:16:04.575 [2024-11-15 10:59:51.223149] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:04.575 [2024-11-15 10:59:51.223236] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:05.953 00:16:05.953 real 0m31.346s 00:16:05.953 user 0m45.065s 00:16:05.953 sys 0m10.198s 00:16:05.953 10:59:52 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:05.953 ************************************ 00:16:05.953 END TEST ublk 
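Both tests tear down in the same order the xtrace shows: stop every ublk disk, destroy the target, then delete the backing bdevs. A minimal sketch of that sequence for the four-disk case, using the same rpc.py path and the seq 0 3 bounds visible above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  for i in $(seq 0 3); do                  # MAX_DEV_ID=3, matching the seq 0 3 loops above
    $RPC ublk_stop_disk "$i"               # STOP_DEV, then DEL_DEV, then "ublk dev $i stopped"
  done
  $RPC -t 120 ublk_destroy_target          # generous timeout, as used above: shutdown waits on the kernel
  for m in Malloc0 Malloc1 Malloc2 Malloc3; do
    $RPC bdev_malloc_delete "$m"
  done
  # Stopping an id twice fails with JSON-RPC code -19 ("No such device"),
  # which is exactly what the NOT rpc_cmd assertion in test_create_ublk checked.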
00:16:05.953 ************************************ 00:16:05.953 10:59:52 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:05.953 10:59:52 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:05.953 10:59:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:05.953 10:59:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:05.953 10:59:52 -- common/autotest_common.sh@10 -- # set +x 00:16:05.953 ************************************ 00:16:05.953 START TEST ublk_recovery 00:16:05.953 ************************************ 00:16:05.953 10:59:52 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:05.953 * Looking for test storage... 00:16:05.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:05.953 10:59:52 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:05.953 10:59:52 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:16:05.953 10:59:52 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:06.212 10:59:52 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.212 10:59:52 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:06.212 10:59:52 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.212 10:59:52 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.212 --rc genhtml_branch_coverage=1 00:16:06.213 --rc genhtml_function_coverage=1 00:16:06.213 --rc genhtml_legend=1 00:16:06.213 --rc geninfo_all_blocks=1 00:16:06.213 --rc geninfo_unexecuted_blocks=1 00:16:06.213 00:16:06.213 ' 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.213 --rc genhtml_branch_coverage=1 00:16:06.213 --rc genhtml_function_coverage=1 00:16:06.213 --rc genhtml_legend=1 00:16:06.213 --rc geninfo_all_blocks=1 00:16:06.213 --rc geninfo_unexecuted_blocks=1 00:16:06.213 00:16:06.213 ' 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.213 --rc genhtml_branch_coverage=1 00:16:06.213 --rc genhtml_function_coverage=1 00:16:06.213 --rc genhtml_legend=1 00:16:06.213 --rc geninfo_all_blocks=1 00:16:06.213 --rc geninfo_unexecuted_blocks=1 00:16:06.213 00:16:06.213 ' 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:06.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.213 --rc genhtml_branch_coverage=1 00:16:06.213 --rc genhtml_function_coverage=1 00:16:06.213 --rc genhtml_legend=1 00:16:06.213 --rc geninfo_all_blocks=1 00:16:06.213 --rc geninfo_unexecuted_blocks=1 00:16:06.213 00:16:06.213 ' 00:16:06.213 10:59:52 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:06.213 10:59:52 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:06.213 10:59:52 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:06.213 10:59:52 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72958 00:16:06.213 10:59:52 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:06.213 10:59:52 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.213 10:59:52 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72958 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 72958 ']' 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.213 10:59:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:06.213 [2024-11-15 10:59:52.974502] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:16:06.213 [2024-11-15 10:59:52.974683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72958 ] 00:16:06.471 [2024-11-15 10:59:53.160749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:06.472 [2024-11-15 10:59:53.306964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.472 [2024-11-15 10:59:53.307002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.863 10:59:54 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.863 10:59:54 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:07.863 10:59:54 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:07.863 10:59:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.863 10:59:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.863 [2024-11-15 10:59:54.313554] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:07.863 [2024-11-15 10:59:54.316709] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:07.863 10:59:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.864 10:59:54 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:07.864 10:59:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.864 10:59:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.864 malloc0 00:16:07.864 10:59:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.864 10:59:54 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:07.864 10:59:54 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.864 10:59:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:07.864 [2024-11-15 10:59:54.505754] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:07.864 [2024-11-15 10:59:54.505903] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:07.864 [2024-11-15 10:59:54.505920] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:07.864 [2024-11-15 10:59:54.505934] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:07.864 [2024-11-15 10:59:54.514721] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:07.864 [2024-11-15 10:59:54.514757] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:07.864 [2024-11-15 10:59:54.521578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:07.864 [2024-11-15 10:59:54.521779] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:07.864 [2024-11-15 10:59:54.544584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:07.864 1 00:16:07.864 10:59:54 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.864 10:59:54 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:08.798 10:59:55 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=73000 00:16:08.798 10:59:55 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:08.798 10:59:55 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:09.057 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:09.057 fio-3.35 00:16:09.057 Starting 1 process 00:16:14.321 11:00:00 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72958 00:16:14.321 11:00:00 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:19.590 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72958 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:19.590 11:00:05 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:19.590 11:00:05 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73113 00:16:19.590 11:00:05 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:19.590 11:00:05 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73113 00:16:19.590 11:00:05 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73113 ']' 00:16:19.590 11:00:05 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.590 11:00:05 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.590 11:00:05 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.590 11:00:05 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.590 11:00:05 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:19.590 [2024-11-15 11:00:05.705519] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
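The recovery scenario above is deliberate: fio runs against /dev/ublkb1, the first spdk_tgt (pid 72958) is killed with SIGKILL mid-I/O, and a replacement target is launched. The xtrace that follows shows the driver-side recovery; condensed, the replacement process only needs the three calls below (a sketch under the same assumptions as earlier, with all commands taken from this log):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $RPC ublk_create_target
  $RPC bdev_malloc_create -b malloc0 64 4096   # recreate the backing bdev under its old name
  $RPC ublk_recover_disk malloc0 1             # GET_DEV_INFO -> START_USER_RECOVERY -> END_USER_RECOVERY
  # The still-running fio job on /dev/ublkb1 resumes once "recover done" is logged.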
00:16:19.590 [2024-11-15 11:00:05.705893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73113 ] 00:16:19.590 [2024-11-15 11:00:05.890563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:19.590 [2024-11-15 11:00:06.031774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.590 [2024-11-15 11:00:06.031813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:20.525 11:00:07 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.525 [2024-11-15 11:00:07.045557] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:20.525 [2024-11-15 11:00:07.048602] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.525 11:00:07 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.525 malloc0 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.525 11:00:07 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.525 [2024-11-15 11:00:07.229786] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:20.525 [2024-11-15 11:00:07.229851] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:20.525 [2024-11-15 11:00:07.229867] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:20.525 [2024-11-15 11:00:07.237622] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:20.525 [2024-11-15 11:00:07.237657] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:20.525 [2024-11-15 11:00:07.237669] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:20.525 [2024-11-15 11:00:07.237789] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:20.525 1 00:16:20.525 11:00:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.525 11:00:07 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 73000 00:16:20.525 [2024-11-15 11:00:07.245583] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:20.525 [2024-11-15 11:00:07.252105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:20.525 [2024-11-15 11:00:07.259795] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:20.525 [2024-11-15 
11:00:07.259827] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:16.760 00:17:16.760 fio_test: (groupid=0, jobs=1): err= 0: pid=73003: Fri Nov 15 11:00:55 2024 00:17:16.760 read: IOPS=19.2k, BW=75.0MiB/s (78.6MB/s)(4499MiB/60002msec) 00:17:16.760 slat (usec): min=2, max=2182, avg= 8.63, stdev= 3.97 00:17:16.760 clat (usec): min=1133, max=6705.2k, avg=3239.88, stdev=47140.12 00:17:16.760 lat (usec): min=1141, max=6705.2k, avg=3248.51, stdev=47140.14 00:17:16.760 clat percentiles (usec): 00:17:16.760 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2638], 00:17:16.760 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2835], 60.00th=[ 2868], 00:17:16.760 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3261], 95.00th=[ 4113], 00:17:16.760 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7111], 99.95th=[ 7439], 00:17:16.760 | 99.99th=[ 9503] 00:17:16.760 bw ( KiB/s): min= 2504, max=102352, per=100.00%, avg=85411.17, stdev=10922.38, samples=107 00:17:16.760 iops : min= 626, max=25588, avg=21352.76, stdev=2730.59, samples=107 00:17:16.760 write: IOPS=19.2k, BW=74.9MiB/s (78.6MB/s)(4497MiB/60002msec); 0 zone resets 00:17:16.760 slat (usec): min=2, max=1006, avg= 8.73, stdev= 3.11 00:17:16.760 clat (usec): min=829, max=6704.7k, avg=3411.51, stdev=52621.89 00:17:16.760 lat (usec): min=835, max=6704.7k, avg=3420.24, stdev=52621.91 00:17:16.760 clat percentiles (usec): 00:17:16.760 | 1.00th=[ 2147], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2704], 00:17:16.760 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2999], 00:17:16.760 | 70.00th=[ 3032], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 4113], 00:17:16.760 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 7570], 00:17:16.760 | 99.99th=[ 9634] 00:17:16.760 bw ( KiB/s): min= 2648, max=104256, per=100.00%, avg=85382.29, stdev=10991.27, samples=107 00:17:16.760 iops : min= 662, max=26064, avg=21345.54, stdev=2747.81, samples=107 00:17:16.760 lat (usec) : 1000=0.01% 00:17:16.760 lat (msec) : 2=0.32%, 4=94.16%, 10=5.51%, 20=0.01%, >=2000=0.01% 00:17:16.760 cpu : usr=12.13%, sys=33.58%, ctx=102309, majf=0, minf=13 00:17:16.760 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:16.760 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:16.760 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:16.760 issued rwts: total=1151668,1151123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:16.760 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:16.760 00:17:16.760 Run status group 0 (all jobs): 00:17:16.760 READ: bw=75.0MiB/s (78.6MB/s), 75.0MiB/s-75.0MiB/s (78.6MB/s-78.6MB/s), io=4499MiB (4717MB), run=60002-60002msec 00:17:16.760 WRITE: bw=74.9MiB/s (78.6MB/s), 74.9MiB/s-74.9MiB/s (78.6MB/s-78.6MB/s), io=4497MiB (4715MB), run=60002-60002msec 00:17:16.760 00:17:16.760 Disk stats (read/write): 00:17:16.760 ublkb1: ios=1149282/1148742, merge=0/0, ticks=3607192/3669830, in_queue=7277023, util=99.96% 00:17:16.760 11:00:55 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.760 [2024-11-15 11:00:55.838141] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:16.760 [2024-11-15 11:00:55.883668] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
00:17:16.760 [2024-11-15 11:00:55.883853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:16.760 [2024-11-15 11:00:55.891577] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:16.760 [2024-11-15 11:00:55.891699] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:16.760 [2024-11-15 11:00:55.891714] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.760 11:00:55 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.760 [2024-11-15 11:00:55.906675] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:16.760 [2024-11-15 11:00:55.914570] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:16.760 [2024-11-15 11:00:55.914614] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.760 11:00:55 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:16.760 11:00:55 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:16.760 11:00:55 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73113 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 73113 ']' 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 73113 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73113 00:17:16.760 killing process with pid 73113 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73113' 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@973 -- # kill 73113 00:17:16.760 11:00:55 ublk_recovery -- common/autotest_common.sh@978 -- # wait 73113 00:17:16.760 [2024-11-15 11:00:57.608226] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:16.760 [2024-11-15 11:00:57.608282] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:16.760 ************************************ 00:17:16.760 END TEST ublk_recovery 00:17:16.760 ************************************ 00:17:16.760 00:17:16.760 real 1m6.389s 00:17:16.760 user 1m49.978s 00:17:16.760 sys 0m39.091s 00:17:16.760 11:00:59 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:16.760 11:00:59 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.760 11:00:59 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:16.760 11:00:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:16.760 11:00:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:16.760 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:17:16.760 11:00:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
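The killprocess helper traced above (used for both pids 72524 and 73113) follows a careful pattern: probe that the pid is alive, check what it actually is, refuse to signal a sudo wrapper directly, then kill and reap. This is a reconstruction from the xtrace, not the verbatim helper:

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                          # probe: is the pid still alive?
    local process_name=
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
    fi
    [ "$process_name" = sudo ] && return 1              # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                                 # reap; a nonzero status from the kill is expected
  }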
00:17:16.760 11:00:59 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:16.760 11:00:59 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:16.760 11:00:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:16.760 11:00:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:16.760 11:00:59 -- common/autotest_common.sh@10 -- # set +x 00:17:16.760 ************************************ 00:17:16.760 START TEST ftl 00:17:16.760 ************************************ 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:16.760 * Looking for test storage... 00:17:16.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.760 11:00:59 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.760 11:00:59 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.760 11:00:59 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.760 11:00:59 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.760 11:00:59 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.760 11:00:59 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:16.760 11:00:59 ftl -- scripts/common.sh@345 -- # : 1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.760 11:00:59 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.760 11:00:59 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@353 -- # local d=1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.760 11:00:59 ftl -- scripts/common.sh@355 -- # echo 1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.760 11:00:59 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@353 -- # local d=2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.760 11:00:59 ftl -- scripts/common.sh@355 -- # echo 2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.760 11:00:59 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.760 11:00:59 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.760 11:00:59 ftl -- scripts/common.sh@368 -- # return 0 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.760 --rc genhtml_branch_coverage=1 00:17:16.760 --rc genhtml_function_coverage=1 00:17:16.760 --rc genhtml_legend=1 00:17:16.760 --rc geninfo_all_blocks=1 00:17:16.760 --rc geninfo_unexecuted_blocks=1 00:17:16.760 00:17:16.760 ' 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.760 --rc genhtml_branch_coverage=1 00:17:16.760 --rc genhtml_function_coverage=1 00:17:16.760 --rc genhtml_legend=1 00:17:16.760 --rc geninfo_all_blocks=1 00:17:16.760 --rc geninfo_unexecuted_blocks=1 00:17:16.760 00:17:16.760 ' 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:16.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.760 --rc genhtml_branch_coverage=1 00:17:16.760 --rc genhtml_function_coverage=1 00:17:16.760 --rc genhtml_legend=1 00:17:16.760 --rc geninfo_all_blocks=1 00:17:16.760 --rc geninfo_unexecuted_blocks=1 00:17:16.760 00:17:16.760 ' 00:17:16.760 11:00:59 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:16.761 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.761 --rc genhtml_branch_coverage=1 00:17:16.761 --rc genhtml_function_coverage=1 00:17:16.761 --rc genhtml_legend=1 00:17:16.761 --rc geninfo_all_blocks=1 00:17:16.761 --rc geninfo_unexecuted_blocks=1 00:17:16.761 00:17:16.761 ' 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:16.761 11:00:59 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:16.761 11:00:59 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.761 11:00:59 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:16.761 11:00:59 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
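The lt/cmp_versions trace above (scripts/common.sh@333-368) is a pure-bash dotted-version comparison: split both version strings on '.', '-' and ':' via IFS, then compare field by field numerically, treating missing fields as 0. A condensed sketch of the same logic, simplified relative to the real helper:

    lt() {   # succeeds when version $1 sorts strictly before version $2
        local -a ver1 ver2
        local v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    lt 1.15 2 && echo "lcov predates 2.x"   # true here, matching the trace above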
00:17:16.761 11:00:59 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:16.761 11:00:59 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.761 11:00:59 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:16.761 11:00:59 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:16.761 11:00:59 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.761 11:00:59 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.761 11:00:59 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:16.761 11:00:59 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:16.761 11:00:59 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:16.761 11:00:59 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:16.761 11:00:59 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:16.761 11:00:59 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:16.761 11:00:59 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.761 11:00:59 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.761 11:00:59 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:16.761 11:00:59 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:16.761 11:00:59 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:16.761 11:00:59 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:16.761 11:00:59 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:16.761 11:00:59 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:16.761 11:00:59 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:16.761 11:00:59 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:16.761 11:00:59 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:16.761 11:00:59 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:16.761 11:00:59 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:16.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:16.761 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:16.761 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:16.761 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:16.761 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:16.761 11:01:00 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73924 00:17:16.761 11:01:00 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:16.761 11:01:00 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73924 00:17:16.761 11:01:00 ftl -- common/autotest_common.sh@835 -- # '[' -z 73924 ']' 00:17:16.761 11:01:00 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.761 11:01:00 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.761 11:01:00 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.761 11:01:00 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.761 11:01:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:16.761 [2024-11-15 11:01:00.381431] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:17:16.761 [2024-11-15 11:01:00.381618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73924 ] 00:17:16.761 [2024-11-15 11:01:00.567101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.761 [2024-11-15 11:01:00.687936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.761 11:01:01 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:16.761 11:01:01 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:16.761 11:01:01 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:16.761 11:01:01 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:16.761 11:01:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:16.761 11:01:02 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:16.761 11:01:02 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:16.761 11:01:02 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:16.761 11:01:02 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@50 -- # break 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@63 -- # break 00:17:16.761 11:01:03 ftl -- ftl/ftl.sh@66 -- # killprocess 73924 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 73924 ']' 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@958 -- # kill -0 73924 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@959 -- # uname 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.761 11:01:03 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73924 00:17:16.761 killing process with pid 73924 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73924' 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@973 -- # kill 73924 00:17:16.761 11:01:03 ftl -- common/autotest_common.sh@978 -- # wait 73924 00:17:19.297 11:01:05 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:19.297 11:01:05 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:19.297 11:01:05 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:19.297 11:01:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.297 11:01:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:19.297 ************************************ 00:17:19.297 START TEST ftl_fio_basic 00:17:19.297 ************************************ 00:17:19.297 11:01:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:19.297 * Looking for test storage... 00:17:19.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:19.297 11:01:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:19.297 11:01:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:17:19.297 11:01:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.298 --rc genhtml_branch_coverage=1 00:17:19.298 --rc genhtml_function_coverage=1 00:17:19.298 --rc genhtml_legend=1 00:17:19.298 --rc geninfo_all_blocks=1 00:17:19.298 --rc geninfo_unexecuted_blocks=1 00:17:19.298 00:17:19.298 ' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.298 --rc genhtml_branch_coverage=1 00:17:19.298 --rc genhtml_function_coverage=1 00:17:19.298 --rc genhtml_legend=1 00:17:19.298 --rc geninfo_all_blocks=1 00:17:19.298 --rc geninfo_unexecuted_blocks=1 00:17:19.298 00:17:19.298 ' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.298 --rc genhtml_branch_coverage=1 00:17:19.298 --rc genhtml_function_coverage=1 00:17:19.298 --rc genhtml_legend=1 00:17:19.298 --rc geninfo_all_blocks=1 00:17:19.298 --rc geninfo_unexecuted_blocks=1 00:17:19.298 00:17:19.298 ' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:19.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:19.298 --rc genhtml_branch_coverage=1 00:17:19.298 --rc genhtml_function_coverage=1 00:17:19.298 --rc genhtml_legend=1 00:17:19.298 --rc geninfo_all_blocks=1 00:17:19.298 --rc geninfo_unexecuted_blocks=1 00:17:19.298 00:17:19.298 ' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
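The 0000:00:11.0 and 0000:00:10.0 arguments handed to fio.sh were picked earlier by ftl.sh (ftl/ftl.sh@47 and @60), which filters rpc.py bdev_get_bdevs output through jq. Run standalone, the cache-device probe from that trace is:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r \
        '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

Any device that passes (here 0000:00:10.0) becomes the NV cache; the base-device filter at ftl/ftl.sh@60 then excludes that PCI address and picks 0000:00:11.0.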
00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=74068 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 74068 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 74068 ']' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.298 11:01:06 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:19.558 [2024-11-15 11:01:06.206989] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
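fio.sh@44-46 above launch a dedicated target with a three-core mask (-m 7 = cores 0-2; the app banner just below reports "Total cores available: 3") and block in waitforlisten until the RPC socket answers. A reduced form of that startup sequence, with waitforlisten boiled down to a poll on rpc_get_methods (the real helper also verifies the pid stays alive):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done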
00:17:19.558 [2024-11-15 11:01:06.207109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:17:19.558 [2024-11-15 11:01:06.387959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.817 [2024-11-15 11:01:06.508599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.817 [2024-11-15 11:01:06.508722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.817 [2024-11-15 11:01:06.508755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:20.750 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:21.009 11:01:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:21.268 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:21.268 { 00:17:21.268 "name": "nvme0n1", 00:17:21.268 "aliases": [ 00:17:21.268 "6d5ce219-962e-490e-90db-03eff1484129" 00:17:21.268 ], 00:17:21.268 "product_name": "NVMe disk", 00:17:21.268 "block_size": 4096, 00:17:21.268 "num_blocks": 1310720, 00:17:21.268 "uuid": "6d5ce219-962e-490e-90db-03eff1484129", 00:17:21.268 "numa_id": -1, 00:17:21.268 "assigned_rate_limits": { 00:17:21.268 "rw_ios_per_sec": 0, 00:17:21.268 "rw_mbytes_per_sec": 0, 00:17:21.268 "r_mbytes_per_sec": 0, 00:17:21.268 "w_mbytes_per_sec": 0 00:17:21.268 }, 00:17:21.268 "claimed": false, 00:17:21.268 "zoned": false, 00:17:21.268 "supported_io_types": { 00:17:21.268 "read": true, 00:17:21.268 "write": true, 00:17:21.268 "unmap": true, 00:17:21.268 "flush": true, 00:17:21.268 "reset": true, 00:17:21.268 "nvme_admin": true, 00:17:21.268 "nvme_io": true, 00:17:21.268 "nvme_io_md": false, 00:17:21.268 "write_zeroes": true, 00:17:21.268 "zcopy": false, 00:17:21.268 "get_zone_info": false, 00:17:21.268 "zone_management": false, 00:17:21.268 "zone_append": false, 00:17:21.268 "compare": true, 00:17:21.268 "compare_and_write": false, 00:17:21.268 "abort": true, 00:17:21.268 
"seek_hole": false, 00:17:21.268 "seek_data": false, 00:17:21.268 "copy": true, 00:17:21.268 "nvme_iov_md": false 00:17:21.268 }, 00:17:21.268 "driver_specific": { 00:17:21.268 "nvme": [ 00:17:21.268 { 00:17:21.268 "pci_address": "0000:00:11.0", 00:17:21.268 "trid": { 00:17:21.268 "trtype": "PCIe", 00:17:21.268 "traddr": "0000:00:11.0" 00:17:21.268 }, 00:17:21.268 "ctrlr_data": { 00:17:21.268 "cntlid": 0, 00:17:21.268 "vendor_id": "0x1b36", 00:17:21.268 "model_number": "QEMU NVMe Ctrl", 00:17:21.268 "serial_number": "12341", 00:17:21.268 "firmware_revision": "8.0.0", 00:17:21.268 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:21.268 "oacs": { 00:17:21.268 "security": 0, 00:17:21.268 "format": 1, 00:17:21.268 "firmware": 0, 00:17:21.268 "ns_manage": 1 00:17:21.268 }, 00:17:21.268 "multi_ctrlr": false, 00:17:21.268 "ana_reporting": false 00:17:21.268 }, 00:17:21.268 "vs": { 00:17:21.268 "nvme_version": "1.4" 00:17:21.268 }, 00:17:21.268 "ns_data": { 00:17:21.268 "id": 1, 00:17:21.268 "can_share": false 00:17:21.268 } 00:17:21.268 } 00:17:21.268 ], 00:17:21.268 "mp_policy": "active_passive" 00:17:21.268 } 00:17:21.268 } 00:17:21.268 ]' 00:17:21.268 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:21.268 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:21.268 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:21.528 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:21.787 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:21.787 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:21.787 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a66a4167-6a0a-47c5-b251-13b20552afe8 00:17:21.787 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a66a4167-6a0a-47c5-b251-13b20552afe8 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=0a2f0803-835b-469b-bc25-e02f5a62c99c 
00:17:22.047 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:22.047 11:01:08 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.306 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:22.306 { 00:17:22.306 "name": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:22.306 "aliases": [ 00:17:22.306 "lvs/nvme0n1p0" 00:17:22.306 ], 00:17:22.306 "product_name": "Logical Volume", 00:17:22.306 "block_size": 4096, 00:17:22.306 "num_blocks": 26476544, 00:17:22.306 "uuid": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:22.306 "assigned_rate_limits": { 00:17:22.306 "rw_ios_per_sec": 0, 00:17:22.306 "rw_mbytes_per_sec": 0, 00:17:22.306 "r_mbytes_per_sec": 0, 00:17:22.306 "w_mbytes_per_sec": 0 00:17:22.306 }, 00:17:22.306 "claimed": false, 00:17:22.306 "zoned": false, 00:17:22.306 "supported_io_types": { 00:17:22.306 "read": true, 00:17:22.306 "write": true, 00:17:22.306 "unmap": true, 00:17:22.306 "flush": false, 00:17:22.306 "reset": true, 00:17:22.306 "nvme_admin": false, 00:17:22.306 "nvme_io": false, 00:17:22.306 "nvme_io_md": false, 00:17:22.306 "write_zeroes": true, 00:17:22.306 "zcopy": false, 00:17:22.306 "get_zone_info": false, 00:17:22.306 "zone_management": false, 00:17:22.306 "zone_append": false, 00:17:22.306 "compare": false, 00:17:22.306 "compare_and_write": false, 00:17:22.306 "abort": false, 00:17:22.306 "seek_hole": true, 00:17:22.306 "seek_data": true, 00:17:22.306 "copy": false, 00:17:22.306 "nvme_iov_md": false 00:17:22.306 }, 00:17:22.306 "driver_specific": { 00:17:22.306 "lvol": { 00:17:22.306 "lvol_store_uuid": "a66a4167-6a0a-47c5-b251-13b20552afe8", 00:17:22.306 "base_bdev": "nvme0n1", 00:17:22.306 "thin_provision": true, 00:17:22.306 "num_allocated_clusters": 0, 00:17:22.306 "snapshot": false, 00:17:22.306 "clone": false, 00:17:22.306 "esnap_clone": false 00:17:22.306 } 00:17:22.306 } 00:17:22.306 } 00:17:22.306 ]' 00:17:22.306 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:22.306 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:22.306 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:22.565 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:22.565 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:22.565 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:22.565 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:22.565 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:22.565 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:22.825 11:01:09 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:22.825 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:23.084 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:23.084 { 00:17:23.084 "name": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:23.084 "aliases": [ 00:17:23.084 "lvs/nvme0n1p0" 00:17:23.084 ], 00:17:23.084 "product_name": "Logical Volume", 00:17:23.084 "block_size": 4096, 00:17:23.084 "num_blocks": 26476544, 00:17:23.084 "uuid": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:23.084 "assigned_rate_limits": { 00:17:23.084 "rw_ios_per_sec": 0, 00:17:23.084 "rw_mbytes_per_sec": 0, 00:17:23.084 "r_mbytes_per_sec": 0, 00:17:23.084 "w_mbytes_per_sec": 0 00:17:23.084 }, 00:17:23.084 "claimed": false, 00:17:23.084 "zoned": false, 00:17:23.084 "supported_io_types": { 00:17:23.084 "read": true, 00:17:23.084 "write": true, 00:17:23.084 "unmap": true, 00:17:23.084 "flush": false, 00:17:23.084 "reset": true, 00:17:23.084 "nvme_admin": false, 00:17:23.084 "nvme_io": false, 00:17:23.084 "nvme_io_md": false, 00:17:23.084 "write_zeroes": true, 00:17:23.084 "zcopy": false, 00:17:23.084 "get_zone_info": false, 00:17:23.084 "zone_management": false, 00:17:23.084 "zone_append": false, 00:17:23.084 "compare": false, 00:17:23.084 "compare_and_write": false, 00:17:23.084 "abort": false, 00:17:23.084 "seek_hole": true, 00:17:23.084 "seek_data": true, 00:17:23.084 "copy": false, 00:17:23.084 "nvme_iov_md": false 00:17:23.084 }, 00:17:23.084 "driver_specific": { 00:17:23.084 "lvol": { 00:17:23.084 "lvol_store_uuid": "a66a4167-6a0a-47c5-b251-13b20552afe8", 00:17:23.084 "base_bdev": "nvme0n1", 00:17:23.084 "thin_provision": true, 00:17:23.084 "num_allocated_clusters": 0, 00:17:23.084 "snapshot": false, 00:17:23.084 "clone": false, 00:17:23.084 "esnap_clone": false 00:17:23.084 } 00:17:23.084 } 00:17:23.084 } 00:17:23.084 ]' 00:17:23.084 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:23.085 11:01:09 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:23.344 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:23.344 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0a2f0803-835b-469b-bc25-e02f5a62c99c 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:23.604 { 00:17:23.604 "name": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:23.604 "aliases": [ 00:17:23.604 "lvs/nvme0n1p0" 00:17:23.604 ], 00:17:23.604 "product_name": "Logical Volume", 00:17:23.604 "block_size": 4096, 00:17:23.604 "num_blocks": 26476544, 00:17:23.604 "uuid": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:23.604 "assigned_rate_limits": { 00:17:23.604 "rw_ios_per_sec": 0, 00:17:23.604 "rw_mbytes_per_sec": 0, 00:17:23.604 "r_mbytes_per_sec": 0, 00:17:23.604 "w_mbytes_per_sec": 0 00:17:23.604 }, 00:17:23.604 "claimed": false, 00:17:23.604 "zoned": false, 00:17:23.604 "supported_io_types": { 00:17:23.604 "read": true, 00:17:23.604 "write": true, 00:17:23.604 "unmap": true, 00:17:23.604 "flush": false, 00:17:23.604 "reset": true, 00:17:23.604 "nvme_admin": false, 00:17:23.604 "nvme_io": false, 00:17:23.604 "nvme_io_md": false, 00:17:23.604 "write_zeroes": true, 00:17:23.604 "zcopy": false, 00:17:23.604 "get_zone_info": false, 00:17:23.604 "zone_management": false, 00:17:23.604 "zone_append": false, 00:17:23.604 "compare": false, 00:17:23.604 "compare_and_write": false, 00:17:23.604 "abort": false, 00:17:23.604 "seek_hole": true, 00:17:23.604 "seek_data": true, 00:17:23.604 "copy": false, 00:17:23.604 "nvme_iov_md": false 00:17:23.604 }, 00:17:23.604 "driver_specific": { 00:17:23.604 "lvol": { 00:17:23.604 "lvol_store_uuid": "a66a4167-6a0a-47c5-b251-13b20552afe8", 00:17:23.604 "base_bdev": "nvme0n1", 00:17:23.604 "thin_provision": true, 00:17:23.604 "num_allocated_clusters": 0, 00:17:23.604 "snapshot": false, 00:17:23.604 "clone": false, 00:17:23.604 "esnap_clone": false 00:17:23.604 } 00:17:23.604 } 00:17:23.604 } 00:17:23.604 ]' 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:23.604 11:01:10 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0a2f0803-835b-469b-bc25-e02f5a62c99c -c nvc0n1p0 --l2p_dram_limit 60 00:17:23.864 [2024-11-15 11:01:10.561461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.561542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:23.864 [2024-11-15 11:01:10.561581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.864 
[2024-11-15 11:01:10.561593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.561692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.561712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:23.864 [2024-11-15 11:01:10.561727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:23.864 [2024-11-15 11:01:10.561738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.561788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:23.864 [2024-11-15 11:01:10.562989] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:23.864 [2024-11-15 11:01:10.563037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.563054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:23.864 [2024-11-15 11:01:10.563075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:17:23.864 [2024-11-15 11:01:10.563090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.563203] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 2177c6dc-bbe0-48ba-8f45-21f485f2fb78 00:17:23.864 [2024-11-15 11:01:10.564693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.564735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:23.864 [2024-11-15 11:01:10.564748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:23.864 [2024-11-15 11:01:10.564761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.572507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.572554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:23.864 [2024-11-15 11:01:10.572568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.688 ms 00:17:23.864 [2024-11-15 11:01:10.572582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.572709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.572727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:23.864 [2024-11-15 11:01:10.572740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:23.864 [2024-11-15 11:01:10.572764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.572857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.572877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:23.864 [2024-11-15 11:01:10.572893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:23.864 [2024-11-15 11:01:10.572910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.572953] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:23.864 [2024-11-15 11:01:10.578553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 
11:01:10.578592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:23.864 [2024-11-15 11:01:10.578610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.612 ms 00:17:23.864 [2024-11-15 11:01:10.578636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.578686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.578698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:23.864 [2024-11-15 11:01:10.578712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:23.864 [2024-11-15 11:01:10.578722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.578779] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:23.864 [2024-11-15 11:01:10.578934] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:23.864 [2024-11-15 11:01:10.578958] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:23.864 [2024-11-15 11:01:10.578973] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:23.864 [2024-11-15 11:01:10.578989] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579001] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579016] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:23.864 [2024-11-15 11:01:10.579026] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:23.864 [2024-11-15 11:01:10.579039] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:23.864 [2024-11-15 11:01:10.579050] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:23.864 [2024-11-15 11:01:10.579064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.579078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:23.864 [2024-11-15 11:01:10.579099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:23.864 [2024-11-15 11:01:10.579110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.579200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.864 [2024-11-15 11:01:10.579212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:23.864 [2024-11-15 11:01:10.579225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:23.864 [2024-11-15 11:01:10.579235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.864 [2024-11-15 11:01:10.579357] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:23.864 [2024-11-15 11:01:10.579370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:23.864 [2024-11-15 11:01:10.579386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579411] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:23.864 [2024-11-15 11:01:10.579421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:23.864 [2024-11-15 11:01:10.579462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.864 [2024-11-15 11:01:10.579497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:23.864 [2024-11-15 11:01:10.579507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:23.864 [2024-11-15 11:01:10.579519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:23.864 [2024-11-15 11:01:10.579543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:23.864 [2024-11-15 11:01:10.579556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:23.864 [2024-11-15 11:01:10.579570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:23.864 [2024-11-15 11:01:10.579596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:23.864 [2024-11-15 11:01:10.579632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:23.864 [2024-11-15 11:01:10.579667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:23.864 [2024-11-15 11:01:10.579682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.864 [2024-11-15 11:01:10.579692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:23.865 [2024-11-15 11:01:10.579707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:23.865 [2024-11-15 11:01:10.579717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.865 [2024-11-15 11:01:10.579732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:23.865 [2024-11-15 11:01:10.579742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:23.865 [2024-11-15 11:01:10.579757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:23.865 [2024-11-15 11:01:10.579766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:23.865 [2024-11-15 11:01:10.579786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:23.865 [2024-11-15 11:01:10.579796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.865 [2024-11-15 11:01:10.579811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:23.865 [2024-11-15 11:01:10.579837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:23.865 [2024-11-15 11:01:10.579850] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:23.865 [2024-11-15 11:01:10.579860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:23.865 [2024-11-15 11:01:10.579872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:23.865 [2024-11-15 11:01:10.579882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.865 [2024-11-15 11:01:10.579894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:23.865 [2024-11-15 11:01:10.579903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:23.865 [2024-11-15 11:01:10.579917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.865 [2024-11-15 11:01:10.579927] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:23.865 [2024-11-15 11:01:10.579940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:23.865 [2024-11-15 11:01:10.579951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:23.865 [2024-11-15 11:01:10.579963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:23.865 [2024-11-15 11:01:10.579984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:23.865 [2024-11-15 11:01:10.579999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:23.865 [2024-11-15 11:01:10.580010] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:23.865 [2024-11-15 11:01:10.580022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:23.865 [2024-11-15 11:01:10.580032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:23.865 [2024-11-15 11:01:10.580045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:23.865 [2024-11-15 11:01:10.580060] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:23.865 [2024-11-15 11:01:10.580076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:23.865 [2024-11-15 11:01:10.580101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:23.865 [2024-11-15 11:01:10.580112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:23.865 [2024-11-15 11:01:10.580132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:23.865 [2024-11-15 11:01:10.580145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:23.865 [2024-11-15 11:01:10.580159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:23.865 [2024-11-15 11:01:10.580170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:23.865 [2024-11-15 11:01:10.580187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:23.865 [2024-11-15 11:01:10.580198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:23.865 [2024-11-15 11:01:10.580218] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:23.865 [2024-11-15 11:01:10.580286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:23.865 [2024-11-15 11:01:10.580303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:23.865 [2024-11-15 11:01:10.580337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:23.865 [2024-11-15 11:01:10.580348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:23.865 [2024-11-15 11:01:10.580362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:23.865 [2024-11-15 11:01:10.580374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:23.865 [2024-11-15 11:01:10.580388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:23.865 [2024-11-15 11:01:10.580399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:17:23.865 [2024-11-15 11:01:10.580411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:23.865 [2024-11-15 11:01:10.580504] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
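The layout dump above can be cross-checked against the bdev_ftl_create parameters: 20971520 L2P entries at 4 bytes per entry ("L2P address size: 4") is exactly the 80.00 MiB l2p region,

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 (MiB), matching "Region l2p ... blocks: 80.00 MiB"

and because the create call passed --l2p_dram_limit 60, the resident L2P cache is later capped at "59 (of 60) MiB" rather than held fully in DRAM.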
00:17:23.865 [2024-11-15 11:01:10.580538] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:30.485 [2024-11-15 11:01:16.952385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:16.952456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:30.485 [2024-11-15 11:01:16.952479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6382.236 ms 00:17:30.485 [2024-11-15 11:01:16.952493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:16.992346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:16.992434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.485 [2024-11-15 11:01:16.992450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.565 ms 00:17:30.485 [2024-11-15 11:01:16.992464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:16.992676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:16.992696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.485 [2024-11-15 11:01:16.992708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:30.485 [2024-11-15 11:01:16.992724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.051180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.051254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.485 [2024-11-15 11:01:17.051279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.464 ms 00:17:30.485 [2024-11-15 11:01:17.051298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.051393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.051411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.485 [2024-11-15 11:01:17.051426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.485 [2024-11-15 11:01:17.051442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.052085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.052123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.485 [2024-11-15 11:01:17.052139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:17:30.485 [2024-11-15 11:01:17.052159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.052340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.052361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.485 [2024-11-15 11:01:17.052375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:17:30.485 [2024-11-15 11:01:17.052395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.074846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.074911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.485 [2024-11-15 
11:01:17.074928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.418 ms 00:17:30.485 [2024-11-15 11:01:17.074940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.090012] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.485 [2024-11-15 11:01:17.106888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.106993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.485 [2024-11-15 11:01:17.107013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.801 ms 00:17:30.485 [2024-11-15 11:01:17.107027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.257532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.257605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:30.485 [2024-11-15 11:01:17.257630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 150.642 ms 00:17:30.485 [2024-11-15 11:01:17.257641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.257897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.257915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.485 [2024-11-15 11:01:17.257935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:17:30.485 [2024-11-15 11:01:17.257945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.298934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.299011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:30.485 [2024-11-15 11:01:17.299032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.915 ms 00:17:30.485 [2024-11-15 11:01:17.299044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.339048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.339119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:30.485 [2024-11-15 11:01:17.339140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.961 ms 00:17:30.485 [2024-11-15 11:01:17.339151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.485 [2024-11-15 11:01:17.339999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.485 [2024-11-15 11:01:17.340031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.485 [2024-11-15 11:01:17.340046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:17:30.485 [2024-11-15 11:01:17.340056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.745 [2024-11-15 11:01:17.486719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.745 [2024-11-15 11:01:17.486807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:30.745 [2024-11-15 11:01:17.486831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 146.771 ms 00:17:30.745 [2024-11-15 11:01:17.486847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.745 [2024-11-15 
11:01:17.528392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.745 [2024-11-15 11:01:17.528461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:30.745 [2024-11-15 11:01:17.528484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.406 ms 00:17:30.745 [2024-11-15 11:01:17.528496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.745 [2024-11-15 11:01:17.569467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.745 [2024-11-15 11:01:17.569546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:30.745 [2024-11-15 11:01:17.569567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.902 ms 00:17:30.745 [2024-11-15 11:01:17.569578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.004 [2024-11-15 11:01:17.609751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.004 [2024-11-15 11:01:17.609820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.004 [2024-11-15 11:01:17.609839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.130 ms 00:17:31.004 [2024-11-15 11:01:17.609850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.004 [2024-11-15 11:01:17.609964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.004 [2024-11-15 11:01:17.609977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.004 [2024-11-15 11:01:17.609995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:31.004 [2024-11-15 11:01:17.610008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.004 [2024-11-15 11:01:17.610239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.004 [2024-11-15 11:01:17.610255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.004 [2024-11-15 11:01:17.610269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:17:31.004 [2024-11-15 11:01:17.610280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.004 [2024-11-15 11:01:17.611678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 7061.096 ms, result 0 00:17:31.004 { 00:17:31.004 "name": "ftl0", 00:17:31.004 "uuid": "2177c6dc-bbe0-48ba-8f45-21f485f2fb78" 00:17:31.004 } 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.004 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.264 11:01:17 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:31.264 [ 00:17:31.264 { 00:17:31.264 "name": "ftl0", 00:17:31.264 "aliases": [ 00:17:31.264 "2177c6dc-bbe0-48ba-8f45-21f485f2fb78" 00:17:31.264 ], 00:17:31.264 "product_name": "FTL 
disk", 00:17:31.264 "block_size": 4096, 00:17:31.264 "num_blocks": 20971520, 00:17:31.264 "uuid": "2177c6dc-bbe0-48ba-8f45-21f485f2fb78", 00:17:31.264 "assigned_rate_limits": { 00:17:31.264 "rw_ios_per_sec": 0, 00:17:31.264 "rw_mbytes_per_sec": 0, 00:17:31.264 "r_mbytes_per_sec": 0, 00:17:31.264 "w_mbytes_per_sec": 0 00:17:31.264 }, 00:17:31.264 "claimed": false, 00:17:31.264 "zoned": false, 00:17:31.264 "supported_io_types": { 00:17:31.264 "read": true, 00:17:31.264 "write": true, 00:17:31.264 "unmap": true, 00:17:31.264 "flush": true, 00:17:31.264 "reset": false, 00:17:31.264 "nvme_admin": false, 00:17:31.264 "nvme_io": false, 00:17:31.264 "nvme_io_md": false, 00:17:31.264 "write_zeroes": true, 00:17:31.264 "zcopy": false, 00:17:31.264 "get_zone_info": false, 00:17:31.264 "zone_management": false, 00:17:31.264 "zone_append": false, 00:17:31.264 "compare": false, 00:17:31.264 "compare_and_write": false, 00:17:31.264 "abort": false, 00:17:31.264 "seek_hole": false, 00:17:31.264 "seek_data": false, 00:17:31.264 "copy": false, 00:17:31.264 "nvme_iov_md": false 00:17:31.264 }, 00:17:31.264 "driver_specific": { 00:17:31.264 "ftl": { 00:17:31.264 "base_bdev": "0a2f0803-835b-469b-bc25-e02f5a62c99c", 00:17:31.264 "cache": "nvc0n1p0" 00:17:31.264 } 00:17:31.264 } 00:17:31.264 } 00:17:31.264 ] 00:17:31.264 11:01:18 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:31.264 11:01:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:31.264 11:01:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:31.523 11:01:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:31.523 11:01:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:31.781 [2024-11-15 11:01:18.520736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.520799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.781 [2024-11-15 11:01:18.520817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:31.781 [2024-11-15 11:01:18.520831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.520895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.781 [2024-11-15 11:01:18.525279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.525338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.781 [2024-11-15 11:01:18.525355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.364 ms 00:17:31.781 [2024-11-15 11:01:18.525365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.526241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.526268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.781 [2024-11-15 11:01:18.526284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:17:31.781 [2024-11-15 11:01:18.526294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.528858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.528889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.781 
[2024-11-15 11:01:18.528904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:17:31.781 [2024-11-15 11:01:18.528915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.534016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.534054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:31.781 [2024-11-15 11:01:18.534070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.043 ms 00:17:31.781 [2024-11-15 11:01:18.534079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.573477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.573545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.781 [2024-11-15 11:01:18.573565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.305 ms 00:17:31.781 [2024-11-15 11:01:18.573576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.600051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.600123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:31.781 [2024-11-15 11:01:18.600143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.380 ms 00:17:31.781 [2024-11-15 11:01:18.600159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.781 [2024-11-15 11:01:18.600482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.781 [2024-11-15 11:01:18.600497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:31.781 [2024-11-15 11:01:18.600512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:17:31.781 [2024-11-15 11:01:18.600523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.042 [2024-11-15 11:01:18.640807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.042 [2024-11-15 11:01:18.640877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.042 [2024-11-15 11:01:18.640898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.292 ms 00:17:32.042 [2024-11-15 11:01:18.640908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.042 [2024-11-15 11:01:18.679740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.042 [2024-11-15 11:01:18.679827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.042 [2024-11-15 11:01:18.679849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.786 ms 00:17:32.042 [2024-11-15 11:01:18.679875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.042 [2024-11-15 11:01:18.719345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.042 [2024-11-15 11:01:18.719420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.042 [2024-11-15 11:01:18.719440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.420 ms 00:17:32.042 [2024-11-15 11:01:18.719450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.042 [2024-11-15 11:01:18.756648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.042 [2024-11-15 11:01:18.756705] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.042 [2024-11-15 11:01:18.756740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.997 ms 00:17:32.042 [2024-11-15 11:01:18.756750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.042 [2024-11-15 11:01:18.756824] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.042 [2024-11-15 11:01:18.756841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.756990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 [2024-11-15 11:01:18.757113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.042 
[2024-11-15 11:01:18.757124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:32.043 [2024-11-15 11:01:18.757431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.757992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.043 [2024-11-15 11:01:18.758124] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.043 [2024-11-15 11:01:18.758136] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 2177c6dc-bbe0-48ba-8f45-21f485f2fb78 00:17:32.043 [2024-11-15 11:01:18.758147] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.043 [2024-11-15 11:01:18.758162] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.043 [2024-11-15 11:01:18.758172] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.043 [2024-11-15 11:01:18.758189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.043 [2024-11-15 11:01:18.758199] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.043 [2024-11-15 11:01:18.758212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:32.043 [2024-11-15 11:01:18.758222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.043 [2024-11-15 11:01:18.758234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.043 [2024-11-15 11:01:18.758244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.043 [2024-11-15 11:01:18.758256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.043 [2024-11-15 11:01:18.758266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.043 [2024-11-15 11:01:18.758280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:17:32.043 [2024-11-15 11:01:18.758290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.043 [2024-11-15 11:01:18.778317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.044 [2024-11-15 11:01:18.778356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.044 [2024-11-15 11:01:18.778389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.958 ms 00:17:32.044 [2024-11-15 11:01:18.778400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.044 [2024-11-15 11:01:18.778995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.044 [2024-11-15 11:01:18.779020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.044 [2024-11-15 11:01:18.779035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:17:32.044 [2024-11-15 11:01:18.779045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.044 [2024-11-15 11:01:18.849440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.044 [2024-11-15 11:01:18.849488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.044 [2024-11-15 11:01:18.849511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.044 [2024-11-15 11:01:18.849522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
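On the statistics dump above: "WAF: inf" is expected for this run. Going by the counters printed next to it, write amplification here is total device writes over user writes, and this instance was torn down before any user I/O reached it (the fio jobs below run against their own instance loaded from the saved ftl.json config), so the 960 metadata writes divide by zero user writes. The same ratio, as a throwaway awk check:

  $ awk 'BEGIN { total = 960; user = 0; print (user ? total / user : "inf") }'
  inf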
00:17:32.044 [2024-11-15 11:01:18.849627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.044 [2024-11-15 11:01:18.849639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.044 [2024-11-15 11:01:18.849652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.044 [2024-11-15 11:01:18.849661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.044 [2024-11-15 11:01:18.849823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.044 [2024-11-15 11:01:18.849838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.044 [2024-11-15 11:01:18.849854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.044 [2024-11-15 11:01:18.849864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.044 [2024-11-15 11:01:18.849923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.044 [2024-11-15 11:01:18.849934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.044 [2024-11-15 11:01:18.849947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.044 [2024-11-15 11:01:18.849957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:18.985034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:18.985115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.303 [2024-11-15 11:01:18.985134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 11:01:18.985144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.088345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:19.088401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.303 [2024-11-15 11:01:19.088419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 11:01:19.088430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.088589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:19.088603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:32.303 [2024-11-15 11:01:19.088617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 11:01:19.088631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.088772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:19.088784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:32.303 [2024-11-15 11:01:19.088797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 11:01:19.088807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.088969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:19.088987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:32.303 [2024-11-15 11:01:19.089001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 
11:01:19.089010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.089099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:19.089118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:32.303 [2024-11-15 11:01:19.089132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 11:01:19.089142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.089202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.303 [2024-11-15 11:01:19.089213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:32.303 [2024-11-15 11:01:19.089226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.303 [2024-11-15 11:01:19.089236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.303 [2024-11-15 11:01:19.089320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.304 [2024-11-15 11:01:19.089331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:32.304 [2024-11-15 11:01:19.089344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.304 [2024-11-15 11:01:19.089354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.304 [2024-11-15 11:01:19.089618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 569.776 ms, result 0 00:17:32.304 true 00:17:32.304 11:01:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 74068 00:17:32.304 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 74068 ']' 00:17:32.304 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 74068 00:17:32.304 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:32.304 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.304 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74068 00:17:32.563 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.563 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.563 killing process with pid 74068 00:17:32.563 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74068' 00:17:32.563 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 74068 00:17:32.563 11:01:19 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 74068 00:17:37.832 11:01:23 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:37.832 11:01:23 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:37.832 11:01:23 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:37.832 11:01:23 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:37.832 11:01:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:37.832 11:01:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:37.832 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:37.832 fio-3.35 00:17:37.832 Starting 1 thread 00:17:43.105 00:17:43.105 test: (groupid=0, jobs=1): err= 0: pid=74319: Fri Nov 15 11:01:29 2024 00:17:43.105 read: IOPS=948, BW=63.0MiB/s (66.1MB/s)(255MiB/4040msec) 00:17:43.105 slat (nsec): min=4258, max=31954, avg=8815.31, stdev=3951.29 00:17:43.105 clat (usec): min=269, max=944, avg=474.28, stdev=59.20 00:17:43.105 lat (usec): min=280, max=949, avg=483.10, stdev=59.88 00:17:43.105 clat percentiles (usec): 00:17:43.105 | 1.00th=[ 326], 5.00th=[ 388], 10.00th=[ 396], 20.00th=[ 429], 00:17:43.105 | 30.00th=[ 445], 40.00th=[ 457], 50.00th=[ 469], 60.00th=[ 494], 00:17:43.105 | 70.00th=[ 515], 80.00th=[ 523], 90.00th=[ 537], 95.00th=[ 570], 00:17:43.105 | 99.00th=[ 619], 99.50th=[ 652], 99.90th=[ 758], 99.95th=[ 816], 00:17:43.105 | 99.99th=[ 947] 00:17:43.105 write: IOPS=955, BW=63.4MiB/s (66.5MB/s)(256MiB/4036msec); 0 zone resets 00:17:43.105 slat (nsec): min=15729, max=82139, avg=21080.92, stdev=4944.71 00:17:43.105 clat (usec): min=360, max=1049, avg=536.02, stdev=75.51 00:17:43.105 lat (usec): min=383, max=1094, avg=557.10, stdev=75.90 00:17:43.105 clat percentiles (usec): 00:17:43.105 | 1.00th=[ 400], 5.00th=[ 424], 10.00th=[ 457], 20.00th=[ 478], 00:17:43.105 | 30.00th=[ 494], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 545], 00:17:43.105 | 70.00th=[ 553], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 627], 00:17:43.105 | 99.00th=[ 857], 99.50th=[ 922], 99.90th=[ 955], 99.95th=[ 1012], 00:17:43.105 | 99.99th=[ 1057] 00:17:43.105 bw ( KiB/s): min=62152, max=67048, per=99.96%, avg=64940.00, stdev=1958.72, samples=8 00:17:43.105 iops : min= 914, max= 986, avg=955.00, stdev=28.80, samples=8 00:17:43.105 lat (usec) : 500=47.28%, 750=51.74%, 1000=0.96% 00:17:43.105 lat 
(msec) : 2=0.03% 00:17:43.105 cpu : usr=99.26%, sys=0.02%, ctx=7, majf=0, minf=1169 00:17:43.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.105 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.105 00:17:43.105 Run status group 0 (all jobs): 00:17:43.105 READ: bw=63.0MiB/s (66.1MB/s), 63.0MiB/s-63.0MiB/s (66.1MB/s-66.1MB/s), io=255MiB (267MB), run=4040-4040msec 00:17:43.105 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=256MiB (269MB), run=4036-4036msec 00:17:45.010 ----------------------------------------------------- 00:17:45.010 Suppressions used: 00:17:45.010 count bytes template 00:17:45.010 1 5 /usr/src/fio/parse.c 00:17:45.010 1 8 libtcmalloc_minimal.so 00:17:45.010 1 904 libcrypto.so 00:17:45.010 ----------------------------------------------------- 00:17:45.010 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:17:45.010 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:45.268 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:45.268 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:45.268 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:17:45.268 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:45.268 11:01:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:45.268 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:45.268 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:45.268 fio-3.35 00:17:45.268 Starting 2 threads 00:18:17.345 00:18:17.345 first_half: (groupid=0, jobs=1): err= 0: pid=74433: Fri Nov 15 11:02:02 2024 00:18:17.345 read: IOPS=2254, BW=9020KiB/s (9236kB/s)(255MiB/28913msec) 00:18:17.345 slat (usec): min=3, max=127, avg= 8.86, stdev= 4.21 00:18:17.345 clat (usec): min=848, max=329314, avg=40687.82, stdev=21428.19 00:18:17.345 lat (usec): min=857, max=329319, avg=40696.68, stdev=21428.77 00:18:17.345 clat percentiles (msec): 00:18:17.345 | 1.00th=[ 6], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 35], 00:18:17.345 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 39], 00:18:17.345 | 70.00th=[ 39], 80.00th=[ 42], 90.00th=[ 44], 95.00th=[ 52], 00:18:17.346 | 99.00th=[ 169], 99.50th=[ 199], 99.90th=[ 255], 99.95th=[ 284], 00:18:17.346 | 99.99th=[ 321] 00:18:17.346 write: IOPS=3033, BW=11.8MiB/s (12.4MB/s)(256MiB/21607msec); 0 zone resets 00:18:17.346 slat (usec): min=4, max=827, avg=11.47, stdev=10.21 00:18:17.346 clat (usec): min=455, max=139767, avg=15959.24, stdev=28234.35 00:18:17.346 lat (usec): min=471, max=139774, avg=15970.71, stdev=28234.51 00:18:17.346 clat percentiles (usec): 00:18:17.346 | 1.00th=[ 1188], 5.00th=[ 1516], 10.00th=[ 1745], 20.00th=[ 2024], 00:18:17.346 | 30.00th=[ 2278], 40.00th=[ 2933], 50.00th=[ 5342], 60.00th=[ 7635], 00:18:17.346 | 70.00th=[ 9634], 80.00th=[ 13173], 90.00th=[ 84411], 95.00th=[ 91751], 00:18:17.346 | 99.00th=[106431], 99.50th=[114820], 99.90th=[124257], 99.95th=[126354], 00:18:17.346 | 99.99th=[135267] 00:18:17.346 bw ( KiB/s): min= 888, max=37568, per=90.98%, avg=20164.92, stdev=10243.60, samples=26 00:18:17.346 iops : min= 222, max= 9392, avg=5041.23, stdev=2560.90, samples=26 00:18:17.346 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.10% 00:18:17.346 lat (msec) : 2=9.68%, 4=12.97%, 10=13.67%, 20=8.47%, 50=46.56% 00:18:17.346 lat (msec) : 100=6.40%, 250=2.07%, 500=0.05% 00:18:17.346 cpu : usr=99.17%, sys=0.16%, ctx=42, majf=0, minf=5591 00:18:17.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:17.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.346 complete : 0=0.0%, 4=99.8%, 8=0.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.346 issued rwts: total=65196,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.346 second_half: (groupid=0, jobs=1): err= 0: pid=74434: Fri Nov 15 11:02:02 2024 00:18:17.346 read: IOPS=2239, BW=8960KiB/s (9175kB/s)(255MiB/29108msec) 00:18:17.346 slat (nsec): min=3372, max=86921, avg=9802.20, stdev=4645.53 00:18:17.346 clat (usec): min=1232, max=334862, avg=39604.55, stdev=21065.75 00:18:17.346 lat (usec): min=1241, max=334872, avg=39614.35, stdev=21066.29 00:18:17.346 clat percentiles (msec): 00:18:17.346 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 35], 00:18:17.346 | 30.00th=[ 37], 40.00th=[ 37], 50.00th=[ 38], 60.00th=[ 39], 00:18:17.346 | 70.00th=[ 39], 80.00th=[ 41], 90.00th=[ 44], 95.00th=[ 49], 00:18:17.346 | 
99.00th=[ 165], 99.50th=[ 209], 99.90th=[ 241], 99.95th=[ 247], 00:18:17.346 | 99.99th=[ 330] 00:18:17.346 write: IOPS=2770, BW=10.8MiB/s (11.3MB/s)(256MiB/23655msec); 0 zone resets 00:18:17.346 slat (usec): min=4, max=553, avg=11.58, stdev= 6.75 00:18:17.346 clat (usec): min=424, max=139918, avg=17413.99, stdev=28988.13 00:18:17.346 lat (usec): min=437, max=139928, avg=17425.57, stdev=28988.73 00:18:17.346 clat percentiles (usec): 00:18:17.346 | 1.00th=[ 1074], 5.00th=[ 1401], 10.00th=[ 1647], 20.00th=[ 1942], 00:18:17.346 | 30.00th=[ 2212], 40.00th=[ 3195], 50.00th=[ 5866], 60.00th=[ 8356], 00:18:17.346 | 70.00th=[ 11731], 80.00th=[ 14615], 90.00th=[ 84411], 95.00th=[ 92799], 00:18:17.346 | 99.00th=[108528], 99.50th=[115868], 99.90th=[126354], 99.95th=[131597], 00:18:17.346 | 99.99th=[139461] 00:18:17.346 bw ( KiB/s): min= 1064, max=50168, per=76.31%, avg=16914.71, stdev=12019.34, samples=31 00:18:17.346 iops : min= 266, max=12542, avg=4228.68, stdev=3004.83, samples=31 00:18:17.346 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.29% 00:18:17.346 lat (msec) : 2=11.07%, 4=10.18%, 10=12.92%, 20=9.88%, 50=47.11% 00:18:17.346 lat (msec) : 100=6.30%, 250=2.20%, 500=0.02% 00:18:17.346 cpu : usr=99.17%, sys=0.16%, ctx=51, majf=0, minf=5510 00:18:17.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:17.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:17.346 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:17.346 issued rwts: total=65200,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:17.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:17.346 00:18:17.346 Run status group 0 (all jobs): 00:18:17.346 READ: bw=17.5MiB/s (18.3MB/s), 8960KiB/s-9020KiB/s (9175kB/s-9236kB/s), io=509MiB (534MB), run=28913-29108msec 00:18:17.346 WRITE: bw=21.6MiB/s (22.7MB/s), 10.8MiB/s-11.8MiB/s (11.3MB/s-12.4MB/s), io=512MiB (537MB), run=21607-23655msec 00:18:18.727 ----------------------------------------------------- 00:18:18.727 Suppressions used: 00:18:18.727 count bytes template 00:18:18.727 2 10 /usr/src/fio/parse.c 00:18:18.727 1 96 /usr/src/fio/iolog.c 00:18:18.727 1 8 libtcmalloc_minimal.so 00:18:18.727 1 904 libcrypto.so 00:18:18.727 ----------------------------------------------------- 00:18:18.727 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:18.727 
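Before each fio invocation in this test (randw-verify, randw-verify-j2, and the randw-verify-depth128 job whose preamble is traced here) the harness runs the same sanitizer detection: ldd the spdk_bdev fio plugin, extract the ASan runtime it links against, and put that runtime first in LD_PRELOAD so it loads before the plugin. A condensed sketch of the libasan case (the actual loop also probes libclang_rt.asan):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio randw-verify-depth128.fio

Preloading the runtime ahead of the plugin matters because fio itself is not ASan-instrumented: the ASan runtime insists on coming first in the initial library list when the instrumented code lives in a dlopen'd plugin.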
11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:18.727 11:02:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:18.727 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:18.727 fio-3.35 00:18:18.727 Starting 1 thread 00:18:36.815 00:18:36.815 test: (groupid=0, jobs=1): err= 0: pid=74802: Fri Nov 15 11:02:21 2024 00:18:36.815 read: IOPS=7395, BW=28.9MiB/s (30.3MB/s)(255MiB/8816msec) 00:18:36.815 slat (usec): min=3, max=100, avg= 8.02, stdev= 4.63 00:18:36.815 clat (usec): min=631, max=33977, avg=17294.67, stdev=1512.13 00:18:36.815 lat (usec): min=641, max=33991, avg=17302.70, stdev=1513.52 00:18:36.815 clat percentiles (usec): 00:18:36.815 | 1.00th=[14746], 5.00th=[15139], 10.00th=[15533], 20.00th=[15926], 00:18:36.815 | 30.00th=[16188], 40.00th=[17171], 50.00th=[17695], 60.00th=[17957], 00:18:36.815 | 70.00th=[18220], 80.00th=[18220], 90.00th=[18482], 95.00th=[19006], 00:18:36.815 | 99.00th=[21103], 99.50th=[23462], 99.90th=[28443], 99.95th=[29754], 00:18:36.815 | 99.99th=[33162] 00:18:36.815 write: IOPS=12.4k, BW=48.3MiB/s (50.6MB/s)(256MiB/5304msec); 0 zone resets 00:18:36.815 slat (usec): min=4, max=1777, avg= 9.50, stdev=12.14 00:18:36.815 clat (usec): min=545, max=58897, avg=10303.50, stdev=12652.08 00:18:36.815 lat (usec): min=552, max=58905, avg=10313.00, stdev=12652.12 00:18:36.815 clat percentiles (usec): 00:18:36.815 | 1.00th=[ 922], 5.00th=[ 1123], 10.00th=[ 1270], 20.00th=[ 1500], 00:18:36.815 | 30.00th=[ 1745], 40.00th=[ 2343], 50.00th=[ 6915], 60.00th=[ 8029], 00:18:36.815 | 70.00th=[ 9110], 80.00th=[11076], 90.00th=[35390], 95.00th=[39060], 00:18:36.815 | 99.00th=[46924], 99.50th=[52167], 99.90th=[55837], 99.95th=[56886], 00:18:36.815 | 99.99th=[57934] 00:18:36.815 bw ( KiB/s): min=29368, max=63488, per=96.42%, avg=47652.64, stdev=10165.75, samples=11 00:18:36.815 iops : min= 7342, max=15872, avg=11913.09, stdev=2541.39, samples=11 00:18:36.815 lat (usec) : 750=0.04%, 1000=1.09% 00:18:36.815 lat (msec) : 2=17.26%, 4=2.63%, 10=16.68%, 20=53.43%, 50=8.51% 00:18:36.815 lat (msec) : 100=0.35% 00:18:36.815 cpu : usr=98.51%, sys=0.50%, ctx=25, majf=0, minf=5565 
00:18:36.815 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:36.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.815 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:36.815 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:36.816 00:18:36.816 Run status group 0 (all jobs): 00:18:36.816 READ: bw=28.9MiB/s (30.3MB/s), 28.9MiB/s-28.9MiB/s (30.3MB/s-30.3MB/s), io=255MiB (267MB), run=8816-8816msec 00:18:36.816 WRITE: bw=48.3MiB/s (50.6MB/s), 48.3MiB/s-48.3MiB/s (50.6MB/s-50.6MB/s), io=256MiB (268MB), run=5304-5304msec 00:18:36.816 ----------------------------------------------------- 00:18:36.816 Suppressions used: 00:18:36.816 count bytes template 00:18:36.816 1 5 /usr/src/fio/parse.c 00:18:36.816 2 192 /usr/src/fio/iolog.c 00:18:36.816 1 8 libtcmalloc_minimal.so 00:18:36.816 1 904 libcrypto.so 00:18:36.816 ----------------------------------------------------- 00:18:36.816 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:36.816 Remove shared memory files 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57836 /dev/shm/spdk_tgt_trace.pid72958 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:36.816 ************************************ 00:18:36.816 END TEST ftl_fio_basic 00:18:36.816 ************************************ 00:18:36.816 00:18:36.816 real 1m17.489s 00:18:36.816 user 2m52.683s 00:18:36.816 sys 0m4.305s 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.816 11:02:23 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:36.816 11:02:23 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:36.816 11:02:23 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:36.816 11:02:23 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.816 11:02:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:36.816 ************************************ 00:18:36.816 START TEST ftl_bdevperf 00:18:36.816 ************************************ 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:36.816 * Looking for test storage... 
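The fio launch traced at the start of this section resolves the ASAN runtime the fio plugin was linked against and preloads it ahead of the plugin, so the sanitizer initializes before any instrumented code runs. A minimal sketch of that detection, assuming the same plugin and job-file paths shown in the trace:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # third ldd column is the resolved path of the libasan the plugin links against
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [[ -n "$asan_lib" ]]; then
        # the sanitizer runtime must come first in LD_PRELOAD, before the plugin itself
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
            /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio
    fi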
00:18:36.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.816 --rc genhtml_branch_coverage=1 00:18:36.816 --rc genhtml_function_coverage=1 00:18:36.816 --rc genhtml_legend=1 00:18:36.816 --rc geninfo_all_blocks=1 00:18:36.816 --rc geninfo_unexecuted_blocks=1 00:18:36.816 00:18:36.816 ' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.816 --rc genhtml_branch_coverage=1 00:18:36.816 
--rc genhtml_function_coverage=1 00:18:36.816 --rc genhtml_legend=1 00:18:36.816 --rc geninfo_all_blocks=1 00:18:36.816 --rc geninfo_unexecuted_blocks=1 00:18:36.816 00:18:36.816 ' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.816 --rc genhtml_branch_coverage=1 00:18:36.816 --rc genhtml_function_coverage=1 00:18:36.816 --rc genhtml_legend=1 00:18:36.816 --rc geninfo_all_blocks=1 00:18:36.816 --rc geninfo_unexecuted_blocks=1 00:18:36.816 00:18:36.816 ' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:36.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:36.816 --rc genhtml_branch_coverage=1 00:18:36.816 --rc genhtml_function_coverage=1 00:18:36.816 --rc genhtml_legend=1 00:18:36.816 --rc geninfo_all_blocks=1 00:18:36.816 --rc geninfo_unexecuted_blocks=1 00:18:36.816 00:18:36.816 ' 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.816 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75046 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75046 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75046 ']' 00:18:36.817 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.078 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.078 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.078 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.078 11:02:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:37.078 [2024-11-15 11:02:23.762906] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
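bdevperf is launched here with -z (start suspended, wait for RPC) and -T ftl0, and the harness blocks in waitforlisten until the app's RPC socket answers before configuring any bdevs. A minimal sketch of that handshake, assuming the default /var/tmp/spdk.sock socket; the polling loop is an illustration, not the harness's exact waitforlisten implementation:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # poll the RPC socket until the target answers (rpc_get_methods succeeds)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done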
00:18:37.078 [2024-11-15 11:02:23.763226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75046 ] 00:18:37.337 [2024-11-15 11:02:23.946282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.337 [2024-11-15 11:02:24.091259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:37.906 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:38.165 11:02:24 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:38.425 { 00:18:38.425 "name": "nvme0n1", 00:18:38.425 "aliases": [ 00:18:38.425 "9fa31d7c-0d97-41a4-be85-07da753344ce" 00:18:38.425 ], 00:18:38.425 "product_name": "NVMe disk", 00:18:38.425 "block_size": 4096, 00:18:38.425 "num_blocks": 1310720, 00:18:38.425 "uuid": "9fa31d7c-0d97-41a4-be85-07da753344ce", 00:18:38.425 "numa_id": -1, 00:18:38.425 "assigned_rate_limits": { 00:18:38.425 "rw_ios_per_sec": 0, 00:18:38.425 "rw_mbytes_per_sec": 0, 00:18:38.425 "r_mbytes_per_sec": 0, 00:18:38.425 "w_mbytes_per_sec": 0 00:18:38.425 }, 00:18:38.425 "claimed": true, 00:18:38.425 "claim_type": "read_many_write_one", 00:18:38.425 "zoned": false, 00:18:38.425 "supported_io_types": { 00:18:38.425 "read": true, 00:18:38.425 "write": true, 00:18:38.425 "unmap": true, 00:18:38.425 "flush": true, 00:18:38.425 "reset": true, 00:18:38.425 "nvme_admin": true, 00:18:38.425 "nvme_io": true, 00:18:38.425 "nvme_io_md": false, 00:18:38.425 "write_zeroes": true, 00:18:38.425 "zcopy": false, 00:18:38.425 "get_zone_info": false, 00:18:38.425 "zone_management": false, 00:18:38.425 "zone_append": false, 00:18:38.425 "compare": true, 00:18:38.425 "compare_and_write": false, 00:18:38.425 "abort": true, 00:18:38.425 "seek_hole": false, 00:18:38.425 "seek_data": false, 00:18:38.425 "copy": true, 00:18:38.425 "nvme_iov_md": false 00:18:38.425 }, 00:18:38.425 "driver_specific": { 00:18:38.425 
"nvme": [ 00:18:38.425 { 00:18:38.425 "pci_address": "0000:00:11.0", 00:18:38.425 "trid": { 00:18:38.425 "trtype": "PCIe", 00:18:38.425 "traddr": "0000:00:11.0" 00:18:38.425 }, 00:18:38.425 "ctrlr_data": { 00:18:38.425 "cntlid": 0, 00:18:38.425 "vendor_id": "0x1b36", 00:18:38.425 "model_number": "QEMU NVMe Ctrl", 00:18:38.425 "serial_number": "12341", 00:18:38.425 "firmware_revision": "8.0.0", 00:18:38.425 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:38.425 "oacs": { 00:18:38.425 "security": 0, 00:18:38.425 "format": 1, 00:18:38.425 "firmware": 0, 00:18:38.425 "ns_manage": 1 00:18:38.425 }, 00:18:38.425 "multi_ctrlr": false, 00:18:38.425 "ana_reporting": false 00:18:38.425 }, 00:18:38.425 "vs": { 00:18:38.425 "nvme_version": "1.4" 00:18:38.425 }, 00:18:38.425 "ns_data": { 00:18:38.425 "id": 1, 00:18:38.425 "can_share": false 00:18:38.425 } 00:18:38.425 } 00:18:38.425 ], 00:18:38.425 "mp_policy": "active_passive" 00:18:38.425 } 00:18:38.425 } 00:18:38.425 ]' 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:38.425 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:38.685 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a66a4167-6a0a-47c5-b251-13b20552afe8 00:18:38.685 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:38.685 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a66a4167-6a0a-47c5-b251-13b20552afe8 00:18:38.944 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:39.203 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=288c100e-7eb8-4a61-ab8b-48450b35bcf9 00:18:39.203 11:02:25 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 288c100e-7eb8-4a61-ab8b-48450b35bcf9 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.463 11:02:26 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:39.463 { 00:18:39.463 "name": "88496803-7b47-4c04-864e-e4d27ae8f1e3", 00:18:39.463 "aliases": [ 00:18:39.463 "lvs/nvme0n1p0" 00:18:39.463 ], 00:18:39.463 "product_name": "Logical Volume", 00:18:39.463 "block_size": 4096, 00:18:39.463 "num_blocks": 26476544, 00:18:39.463 "uuid": "88496803-7b47-4c04-864e-e4d27ae8f1e3", 00:18:39.463 "assigned_rate_limits": { 00:18:39.463 "rw_ios_per_sec": 0, 00:18:39.463 "rw_mbytes_per_sec": 0, 00:18:39.463 "r_mbytes_per_sec": 0, 00:18:39.463 "w_mbytes_per_sec": 0 00:18:39.463 }, 00:18:39.463 "claimed": false, 00:18:39.463 "zoned": false, 00:18:39.463 "supported_io_types": { 00:18:39.463 "read": true, 00:18:39.463 "write": true, 00:18:39.463 "unmap": true, 00:18:39.463 "flush": false, 00:18:39.463 "reset": true, 00:18:39.463 "nvme_admin": false, 00:18:39.463 "nvme_io": false, 00:18:39.463 "nvme_io_md": false, 00:18:39.463 "write_zeroes": true, 00:18:39.463 "zcopy": false, 00:18:39.463 "get_zone_info": false, 00:18:39.463 "zone_management": false, 00:18:39.463 "zone_append": false, 00:18:39.463 "compare": false, 00:18:39.463 "compare_and_write": false, 00:18:39.463 "abort": false, 00:18:39.463 "seek_hole": true, 00:18:39.463 "seek_data": true, 00:18:39.463 "copy": false, 00:18:39.463 "nvme_iov_md": false 00:18:39.463 }, 00:18:39.463 "driver_specific": { 00:18:39.463 "lvol": { 00:18:39.463 "lvol_store_uuid": "288c100e-7eb8-4a61-ab8b-48450b35bcf9", 00:18:39.463 "base_bdev": "nvme0n1", 00:18:39.463 "thin_provision": true, 00:18:39.463 "num_allocated_clusters": 0, 00:18:39.463 "snapshot": false, 00:18:39.463 "clone": false, 00:18:39.463 "esnap_clone": false 00:18:39.463 } 00:18:39.463 } 00:18:39.463 } 00:18:39.463 ]' 00:18:39.463 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:39.723 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:39.982 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:40.241 { 00:18:40.241 "name": "88496803-7b47-4c04-864e-e4d27ae8f1e3", 00:18:40.241 "aliases": [ 00:18:40.241 "lvs/nvme0n1p0" 00:18:40.241 ], 00:18:40.241 "product_name": "Logical Volume", 00:18:40.241 "block_size": 4096, 00:18:40.241 "num_blocks": 26476544, 00:18:40.241 "uuid": "88496803-7b47-4c04-864e-e4d27ae8f1e3", 00:18:40.241 "assigned_rate_limits": { 00:18:40.241 "rw_ios_per_sec": 0, 00:18:40.241 "rw_mbytes_per_sec": 0, 00:18:40.241 "r_mbytes_per_sec": 0, 00:18:40.241 "w_mbytes_per_sec": 0 00:18:40.241 }, 00:18:40.241 "claimed": false, 00:18:40.241 "zoned": false, 00:18:40.241 "supported_io_types": { 00:18:40.241 "read": true, 00:18:40.241 "write": true, 00:18:40.241 "unmap": true, 00:18:40.241 "flush": false, 00:18:40.241 "reset": true, 00:18:40.241 "nvme_admin": false, 00:18:40.241 "nvme_io": false, 00:18:40.241 "nvme_io_md": false, 00:18:40.241 "write_zeroes": true, 00:18:40.241 "zcopy": false, 00:18:40.241 "get_zone_info": false, 00:18:40.241 "zone_management": false, 00:18:40.241 "zone_append": false, 00:18:40.241 "compare": false, 00:18:40.241 "compare_and_write": false, 00:18:40.241 "abort": false, 00:18:40.241 "seek_hole": true, 00:18:40.241 "seek_data": true, 00:18:40.241 "copy": false, 00:18:40.241 "nvme_iov_md": false 00:18:40.241 }, 00:18:40.241 "driver_specific": { 00:18:40.241 "lvol": { 00:18:40.241 "lvol_store_uuid": "288c100e-7eb8-4a61-ab8b-48450b35bcf9", 00:18:40.241 "base_bdev": "nvme0n1", 00:18:40.241 "thin_provision": true, 00:18:40.241 "num_allocated_clusters": 0, 00:18:40.241 "snapshot": false, 00:18:40.241 "clone": false, 00:18:40.241 "esnap_clone": false 00:18:40.241 } 00:18:40.241 } 00:18:40.241 } 00:18:40.241 ]' 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:40.241 11:02:26 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88496803-7b47-4c04-864e-e4d27ae8f1e3 00:18:40.501 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:40.501 { 00:18:40.501 "name": "88496803-7b47-4c04-864e-e4d27ae8f1e3", 00:18:40.501 "aliases": [ 00:18:40.501 "lvs/nvme0n1p0" 00:18:40.501 ], 00:18:40.501 "product_name": "Logical Volume", 00:18:40.501 "block_size": 4096, 00:18:40.501 "num_blocks": 26476544, 00:18:40.501 "uuid": "88496803-7b47-4c04-864e-e4d27ae8f1e3", 00:18:40.501 "assigned_rate_limits": { 00:18:40.501 "rw_ios_per_sec": 0, 00:18:40.501 "rw_mbytes_per_sec": 0, 00:18:40.501 "r_mbytes_per_sec": 0, 00:18:40.501 "w_mbytes_per_sec": 0 00:18:40.501 }, 00:18:40.501 "claimed": false, 00:18:40.501 "zoned": false, 00:18:40.501 "supported_io_types": { 00:18:40.501 "read": true, 00:18:40.501 "write": true, 00:18:40.501 "unmap": true, 00:18:40.501 "flush": false, 00:18:40.501 "reset": true, 00:18:40.501 "nvme_admin": false, 00:18:40.501 "nvme_io": false, 00:18:40.501 "nvme_io_md": false, 00:18:40.501 "write_zeroes": true, 00:18:40.501 "zcopy": false, 00:18:40.501 "get_zone_info": false, 00:18:40.501 "zone_management": false, 00:18:40.501 "zone_append": false, 00:18:40.501 "compare": false, 00:18:40.502 "compare_and_write": false, 00:18:40.502 "abort": false, 00:18:40.502 "seek_hole": true, 00:18:40.502 "seek_data": true, 00:18:40.502 "copy": false, 00:18:40.502 "nvme_iov_md": false 00:18:40.502 }, 00:18:40.502 "driver_specific": { 00:18:40.502 "lvol": { 00:18:40.502 "lvol_store_uuid": "288c100e-7eb8-4a61-ab8b-48450b35bcf9", 00:18:40.502 "base_bdev": "nvme0n1", 00:18:40.502 "thin_provision": true, 00:18:40.502 "num_allocated_clusters": 0, 00:18:40.502 "snapshot": false, 00:18:40.502 "clone": false, 00:18:40.502 "esnap_clone": false 00:18:40.502 } 00:18:40.502 } 00:18:40.502 } 00:18:40.502 ]' 00:18:40.502 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:40.763 11:02:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88496803-7b47-4c04-864e-e4d27ae8f1e3 -c nvc0n1p0 --l2p_dram_limit 20 00:18:40.763 [2024-11-15 11:02:27.586706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.586784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.763 [2024-11-15 11:02:27.586804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.763 [2024-11-15 11:02:27.586819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.586873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.586893] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.763 [2024-11-15 11:02:27.586905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:40.763 [2024-11-15 11:02:27.586919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.586939] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.763 [2024-11-15 11:02:27.587915] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.763 [2024-11-15 11:02:27.587939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.587953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.763 [2024-11-15 11:02:27.587965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:18:40.763 [2024-11-15 11:02:27.587979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.588018] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a8043d23-16e2-4318-95da-8bc2a37cbaa1 00:18:40.763 [2024-11-15 11:02:27.590450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.590638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:40.763 [2024-11-15 11:02:27.590667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:40.763 [2024-11-15 11:02:27.590683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.604435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.604612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.763 [2024-11-15 11:02:27.604642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.581 ms 00:18:40.763 [2024-11-15 11:02:27.604654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.604773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.604787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.763 [2024-11-15 11:02:27.604808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:40.763 [2024-11-15 11:02:27.604819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.604882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.604895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.763 [2024-11-15 11:02:27.604910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:40.763 [2024-11-15 11:02:27.604921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.604947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.763 [2024-11-15 11:02:27.611376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.611414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.763 [2024-11-15 11:02:27.611428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.450 ms 00:18:40.763 [2024-11-15 11:02:27.611444] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.611483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.611499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.763 [2024-11-15 11:02:27.611510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:40.763 [2024-11-15 11:02:27.611535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.611568] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:40.763 [2024-11-15 11:02:27.611731] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.763 [2024-11-15 11:02:27.611747] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.763 [2024-11-15 11:02:27.611765] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:40.763 [2024-11-15 11:02:27.611779] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.763 [2024-11-15 11:02:27.611795] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.763 [2024-11-15 11:02:27.611807] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:40.763 [2024-11-15 11:02:27.611822] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.763 [2024-11-15 11:02:27.611833] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.763 [2024-11-15 11:02:27.611846] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.763 [2024-11-15 11:02:27.611858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.611878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.763 [2024-11-15 11:02:27.611889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:40.763 [2024-11-15 11:02:27.611903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.611975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.763 [2024-11-15 11:02:27.611992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.763 [2024-11-15 11:02:27.612003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:40.763 [2024-11-15 11:02:27.612019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.763 [2024-11-15 11:02:27.612098] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.763 [2024-11-15 11:02:27.612115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.763 [2024-11-15 11:02:27.612130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.763 [2024-11-15 11:02:27.612144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.763 [2024-11-15 11:02:27.612155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.763 [2024-11-15 11:02:27.612168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.763 [2024-11-15 11:02:27.612178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:40.763 
[2024-11-15 11:02:27.612191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.763 [2024-11-15 11:02:27.612201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.764 [2024-11-15 11:02:27.612225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.764 [2024-11-15 11:02:27.612237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:40.764 [2024-11-15 11:02:27.612247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.764 [2024-11-15 11:02:27.612273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.764 [2024-11-15 11:02:27.612284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:40.764 [2024-11-15 11:02:27.612301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.764 [2024-11-15 11:02:27.612323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.764 [2024-11-15 11:02:27.612359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.764 [2024-11-15 11:02:27.612394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.764 [2024-11-15 11:02:27.612427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.764 [2024-11-15 11:02:27.612463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.764 [2024-11-15 11:02:27.612498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.764 [2024-11-15 11:02:27.612519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.764 [2024-11-15 11:02:27.612546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:40.764 [2024-11-15 11:02:27.612556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.764 [2024-11-15 11:02:27.612569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.764 [2024-11-15 11:02:27.612579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:40.764 [2024-11-15 11:02:27.612592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.764 [2024-11-15 11:02:27.612613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:40.764 [2024-11-15 11:02:27.612623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612636] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.764 [2024-11-15 11:02:27.612647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.764 [2024-11-15 11:02:27.612661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.764 [2024-11-15 11:02:27.612696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.764 [2024-11-15 11:02:27.612706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.764 [2024-11-15 11:02:27.612719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.764 [2024-11-15 11:02:27.612729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.764 [2024-11-15 11:02:27.612742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.764 [2024-11-15 11:02:27.612752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.764 [2024-11-15 11:02:27.612771] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.764 [2024-11-15 11:02:27.612784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.612800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:40.764 [2024-11-15 11:02:27.612811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:40.764 [2024-11-15 11:02:27.612825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:40.764 [2024-11-15 11:02:27.612836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:40.764 [2024-11-15 11:02:27.612850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:40.764 [2024-11-15 11:02:27.612861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:40.764 [2024-11-15 11:02:27.612875] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:40.764 [2024-11-15 11:02:27.612886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:40.764 [2024-11-15 11:02:27.612903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:40.764 [2024-11-15 11:02:27.612914] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.612927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.612937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.612950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.612961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:40.764 [2024-11-15 11:02:27.612974] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.764 [2024-11-15 11:02:27.612986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.613002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.764 [2024-11-15 11:02:27.613024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.764 [2024-11-15 11:02:27.613038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.764 [2024-11-15 11:02:27.613050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.764 [2024-11-15 11:02:27.613065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.764 [2024-11-15 11:02:27.613079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.764 [2024-11-15 11:02:27.613093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:18:40.764 [2024-11-15 11:02:27.613104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.764 [2024-11-15 11:02:27.613164] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
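The FTL startup log above was produced by stacking a thin-provisioned lvol on the base controller (0000:00:11.0) under an FTL bdev whose NV cache is a split of the second controller (0000:00:10.0). A condensed sketch of the RPC sequence the trace walked through, with the UUIDs the log printed replaced by placeholders:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device -> nvme0n1
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>             # thin-provisioned 103424 MiB lvol
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB slice -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20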
00:18:40.764 [2024-11-15 11:02:27.613178] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:44.964 [2024-11-15 11:02:30.928985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.964 [2024-11-15 11:02:30.929070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:44.964 [2024-11-15 11:02:30.929102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3321.199 ms 00:18:44.964 [2024-11-15 11:02:30.929114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.964 [2024-11-15 11:02:30.976248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.964 [2024-11-15 11:02:30.976318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.964 [2024-11-15 11:02:30.976343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.847 ms 00:18:44.964 [2024-11-15 11:02:30.976355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.964 [2024-11-15 11:02:30.976522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.964 [2024-11-15 11:02:30.976554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:44.964 [2024-11-15 11:02:30.976575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:44.964 [2024-11-15 11:02:30.976587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.964 [2024-11-15 11:02:31.042955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.964 [2024-11-15 11:02:31.043196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.964 [2024-11-15 11:02:31.043232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.390 ms 00:18:44.964 [2024-11-15 11:02:31.043245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.964 [2024-11-15 11:02:31.043292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.964 [2024-11-15 11:02:31.043309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.965 [2024-11-15 11:02:31.043325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:44.965 [2024-11-15 11:02:31.043336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.044211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.044232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.965 [2024-11-15 11:02:31.044248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:18:44.965 [2024-11-15 11:02:31.044260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.044381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.044406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.965 [2024-11-15 11:02:31.044424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:18:44.965 [2024-11-15 11:02:31.044435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.068242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.068283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.965 [2024-11-15 
11:02:31.068302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.822 ms 00:18:44.965 [2024-11-15 11:02:31.068313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.083093] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:44.965 [2024-11-15 11:02:31.092599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.092637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:44.965 [2024-11-15 11:02:31.092652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.231 ms 00:18:44.965 [2024-11-15 11:02:31.092667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.174261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.174469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:44.965 [2024-11-15 11:02:31.174508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.685 ms 00:18:44.965 [2024-11-15 11:02:31.174524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.174730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.174753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:44.965 [2024-11-15 11:02:31.174766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:18:44.965 [2024-11-15 11:02:31.174780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.211271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.211425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:44.965 [2024-11-15 11:02:31.211448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.496 ms 00:18:44.965 [2024-11-15 11:02:31.211463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.248539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.248580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:44.965 [2024-11-15 11:02:31.248596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.017 ms 00:18:44.965 [2024-11-15 11:02:31.248610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.249379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.249404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:44.965 [2024-11-15 11:02:31.249417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:18:44.965 [2024-11-15 11:02:31.249430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.350397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.350587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:44.965 [2024-11-15 11:02:31.350612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.080 ms 00:18:44.965 [2024-11-15 11:02:31.350628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 
11:02:31.390831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.390879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:44.965 [2024-11-15 11:02:31.390894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.188 ms 00:18:44.965 [2024-11-15 11:02:31.390914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.427952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.427995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:44.965 [2024-11-15 11:02:31.428010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.055 ms 00:18:44.965 [2024-11-15 11:02:31.428025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.464881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.464924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:44.965 [2024-11-15 11:02:31.464939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.874 ms 00:18:44.965 [2024-11-15 11:02:31.464954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.465001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.465021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:44.965 [2024-11-15 11:02:31.465034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:44.965 [2024-11-15 11:02:31.465047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.465165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.965 [2024-11-15 11:02:31.465183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:44.965 [2024-11-15 11:02:31.465195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:44.965 [2024-11-15 11:02:31.465210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.965 [2024-11-15 11:02:31.466683] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3885.718 ms, result 0 00:18:44.965 { 00:18:44.965 "name": "ftl0", 00:18:44.965 "uuid": "a8043d23-16e2-4318-95da-8bc2a37cbaa1" 00:18:44.965 } 00:18:44.965 11:02:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:44.965 11:02:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:44.965 11:02:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:44.965 11:02:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:44.965 [2024-11-15 11:02:31.802250] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:44.965 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:44.965 Zero copy mechanism will not be used. 00:18:44.965 Running I/O for 4 seconds... 
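With ftl0 created, the harness drives the already-running bdevperf process through its helper script rather than restarting it; as the log itself notes, the 69632-byte IO size exceeds bdevperf's 65536-byte zero-copy threshold, so buffers are copied for this run. The invocation, restated:

    # queue depth 1, random writes, 4 seconds, 69632-byte (68 KiB) IOs
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests \
        -q 1 -w randwrite -t 4 -o 69632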
00:18:47.280 1609.00 IOPS, 106.85 MiB/s [2024-11-15T11:02:35.076Z] 1630.50 IOPS, 108.28 MiB/s [2024-11-15T11:02:36.012Z] 1640.67 IOPS, 108.95 MiB/s [2024-11-15T11:02:36.012Z] 1647.50 IOPS, 109.40 MiB/s 00:18:49.151 Latency(us) 00:18:49.151 [2024-11-15T11:02:36.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.151 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:49.151 ftl0 : 4.00 1647.11 109.38 0.00 0.00 636.57 218.78 2145.05 00:18:49.151 [2024-11-15T11:02:36.012Z] =================================================================================================================== 00:18:49.151 [2024-11-15T11:02:36.012Z] Total : 1647.11 109.38 0.00 0.00 636.57 218.78 2145.05 00:18:49.151 { 00:18:49.151 "results": [ 00:18:49.151 { 00:18:49.151 "job": "ftl0", 00:18:49.151 "core_mask": "0x1", 00:18:49.151 "workload": "randwrite", 00:18:49.151 "status": "finished", 00:18:49.151 "queue_depth": 1, 00:18:49.151 "io_size": 69632, 00:18:49.151 "runtime": 4.001548, 00:18:49.151 "iops": 1647.112567436402, 00:18:49.151 "mibps": 109.37856893132357, 00:18:49.151 "io_failed": 0, 00:18:49.151 "io_timeout": 0, 00:18:49.151 "avg_latency_us": 636.568169689835, 00:18:49.151 "min_latency_us": 218.78232931726907, 00:18:49.151 "max_latency_us": 2145.0538152610443 00:18:49.151 } 00:18:49.151 ], 00:18:49.151 "core_count": 1 00:18:49.151 } 00:18:49.151 [2024-11-15 11:02:35.807200] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:49.151 11:02:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:49.152 [2024-11-15 11:02:35.925198] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:49.152 Running I/O for 4 seconds... 
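The -q 1 figures in the table above can be cross-checked with Little's law: average queue occupancy equals IOPS times average latency. A one-line sketch using the numbers reported above; the check is an editorial illustration, not part of the test suite:
# Little's law sanity check for the -q 1 run: 1647.11 IOPS * 636.57 us avg latency.
awk 'BEGIN { printf "%.2f outstanding IOs\n", 1647.11 * 636.57 / 1e6 }'   # ~1.05, consistent with -q 1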
00:18:51.462 11644.00 IOPS, 45.48 MiB/s [2024-11-15T11:02:39.259Z] 10805.50 IOPS, 42.21 MiB/s [2024-11-15T11:02:40.196Z] 10763.33 IOPS, 42.04 MiB/s [2024-11-15T11:02:40.196Z] 10690.25 IOPS, 41.76 MiB/s 00:18:53.335 Latency(us) 00:18:53.335 [2024-11-15T11:02:40.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.335 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.335 ftl0 : 4.02 10677.27 41.71 0.00 0.00 11962.13 228.65 31583.61 00:18:53.335 [2024-11-15T11:02:40.196Z] =================================================================================================================== 00:18:53.335 [2024-11-15T11:02:40.196Z] Total : 10677.27 41.71 0.00 0.00 11962.13 0.00 31583.61 00:18:53.335 { 00:18:53.335 "results": [ 00:18:53.335 { 00:18:53.335 "job": "ftl0", 00:18:53.335 "core_mask": "0x1", 00:18:53.335 "workload": "randwrite", 00:18:53.335 "status": "finished", 00:18:53.335 "queue_depth": 128, 00:18:53.335 "io_size": 4096, 00:18:53.335 "runtime": 4.016664, 00:18:53.335 "iops": 10677.26849943137, 00:18:53.335 "mibps": 41.70808007590379, 00:18:53.335 "io_failed": 0, 00:18:53.335 "io_timeout": 0, 00:18:53.335 "avg_latency_us": 11962.125093804461, 00:18:53.335 "min_latency_us": 228.65220883534136, 00:18:53.335 "max_latency_us": 31583.614457831325 00:18:53.335 } 00:18:53.335 ], 00:18:53.335 "core_count": 1 00:18:53.335 } 00:18:53.335 [2024-11-15 11:02:39.947513] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:53.335 11:02:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:53.335 [2024-11-15 11:02:40.074839] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:53.335 Running I/O for 4 seconds... 
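Similarly, the MiB/s column in these tables follows directly from IOPS and the -o IO size. A minimal sketch of the conversion, using the "iops" and "io_size" fields from the -q 128 -w randwrite results above; the snippet is illustrative and not taken from the SPDK scripts:
# Hypothetical helper: derive MiB/s from IOPS and per-IO size in bytes.
iops=10677.27   # "iops" from the JSON results above
io_size=4096    # "io_size" in bytes (-o 4096)
awk -v iops="$iops" -v sz="$io_size" 'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'   # 41.71 MiB/s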
00:18:55.223 8661.00 IOPS, 33.83 MiB/s [2024-11-15T11:02:43.460Z] 8710.00 IOPS, 34.02 MiB/s [2024-11-15T11:02:44.396Z] 8541.00 IOPS, 33.36 MiB/s [2024-11-15T11:02:44.396Z] 8617.25 IOPS, 33.66 MiB/s 00:18:57.535 Latency(us) 00:18:57.535 [2024-11-15T11:02:44.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.535 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:57.535 Verification LBA range: start 0x0 length 0x1400000 00:18:57.535 ftl0 : 4.01 8627.80 33.70 0.00 0.00 14788.83 268.13 20002.96 00:18:57.535 [2024-11-15T11:02:44.396Z] =================================================================================================================== 00:18:57.535 [2024-11-15T11:02:44.396Z] Total : 8627.80 33.70 0.00 0.00 14788.83 0.00 20002.96 00:18:57.535 { 00:18:57.535 "results": [ 00:18:57.535 { 00:18:57.535 "job": "ftl0", 00:18:57.535 "core_mask": "0x1", 00:18:57.535 "workload": "verify", 00:18:57.535 "status": "finished", 00:18:57.535 "verify_range": { 00:18:57.535 "start": 0, 00:18:57.535 "length": 20971520 00:18:57.535 }, 00:18:57.535 "queue_depth": 128, 00:18:57.535 "io_size": 4096, 00:18:57.535 "runtime": 4.009829, 00:18:57.535 "iops": 8627.79934007161, 00:18:57.535 "mibps": 33.702341172154725, 00:18:57.535 "io_failed": 0, 00:18:57.535 "io_timeout": 0, 00:18:57.535 "avg_latency_us": 14788.828223612452, 00:18:57.536 "min_latency_us": 268.13172690763054, 00:18:57.536 "max_latency_us": 20002.955823293174 00:18:57.536 } 00:18:57.536 ], 00:18:57.536 "core_count": 1 00:18:57.536 } 00:18:57.536 [2024-11-15 11:02:44.100617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:57.536 11:02:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:57.536 [2024-11-15 11:02:44.309945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.536 [2024-11-15 11:02:44.310014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.536 [2024-11-15 11:02:44.310037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:57.536 [2024-11-15 11:02:44.310053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.536 [2024-11-15 11:02:44.310081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.536 [2024-11-15 11:02:44.315050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.536 [2024-11-15 11:02:44.315084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.536 [2024-11-15 11:02:44.315102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:18:57.536 [2024-11-15 11:02:44.315114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.536 [2024-11-15 11:02:44.316954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.536 [2024-11-15 11:02:44.316996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.536 [2024-11-15 11:02:44.317019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:18:57.536 [2024-11-15 11:02:44.317032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.795 [2024-11-15 11:02:44.526714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.795 [2024-11-15 11:02:44.526925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:18:57.795 [2024-11-15 11:02:44.526965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 209.986 ms 00:18:57.795 [2024-11-15 11:02:44.526978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.795 [2024-11-15 11:02:44.532450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.795 [2024-11-15 11:02:44.532486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.795 [2024-11-15 11:02:44.532505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.431 ms 00:18:57.795 [2024-11-15 11:02:44.532516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.795 [2024-11-15 11:02:44.573547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.795 [2024-11-15 11:02:44.573715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.795 [2024-11-15 11:02:44.573745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.002 ms 00:18:57.795 [2024-11-15 11:02:44.573758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.795 [2024-11-15 11:02:44.598520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.795 [2024-11-15 11:02:44.598569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.795 [2024-11-15 11:02:44.598594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.723 ms 00:18:57.795 [2024-11-15 11:02:44.598606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.795 [2024-11-15 11:02:44.598773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.795 [2024-11-15 11:02:44.598790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.795 [2024-11-15 11:02:44.598810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:18:57.795 [2024-11-15 11:02:44.598822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.795 [2024-11-15 11:02:44.637922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.795 [2024-11-15 11:02:44.637961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:57.795 [2024-11-15 11:02:44.637980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.138 ms 00:18:57.795 [2024-11-15 11:02:44.637991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.056 [2024-11-15 11:02:44.676470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.056 [2024-11-15 11:02:44.676508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:58.056 [2024-11-15 11:02:44.676537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.493 ms 00:18:58.056 [2024-11-15 11:02:44.676549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.056 [2024-11-15 11:02:44.715423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.056 [2024-11-15 11:02:44.715459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:58.056 [2024-11-15 11:02:44.715477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.889 ms 00:18:58.056 [2024-11-15 11:02:44.715488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.056 [2024-11-15 11:02:44.755071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.056 [2024-11-15 
11:02:44.755219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:58.056 [2024-11-15 11:02:44.755251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.367 ms 00:18:58.056 [2024-11-15 11:02:44.755262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.056 [2024-11-15 11:02:44.755305] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:58.056 [2024-11-15 11:02:44.755325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:58.056 [2024-11-15 11:02:44.755344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:58.056 [2024-11-15 11:02:44.755358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:58.056 [2024-11-15 11:02:44.755373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:58.056 [2024-11-15 11:02:44.755386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:58.056 [2024-11-15 11:02:44.755401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:58.056 [2024-11-15 11:02:44.755413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.755999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756374] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756734] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:58.057 [2024-11-15 11:02:44.756799] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:58.057 [2024-11-15 11:02:44.756814] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a8043d23-16e2-4318-95da-8bc2a37cbaa1 00:18:58.057 [2024-11-15 11:02:44.756827] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:58.057 [2024-11-15 11:02:44.756842] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:58.057 [2024-11-15 11:02:44.756857] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:58.057 [2024-11-15 11:02:44.756872] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:58.057 [2024-11-15 11:02:44.756883] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:58.057 [2024-11-15 11:02:44.756899] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:58.057 [2024-11-15 11:02:44.756910] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:58.057 [2024-11-15 11:02:44.756927] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:58.057 [2024-11-15 11:02:44.756936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:58.057 [2024-11-15 11:02:44.756951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.057 [2024-11-15 11:02:44.756963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:58.057 [2024-11-15 11:02:44.756979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:18:58.057 [2024-11-15 11:02:44.756990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.057 [2024-11-15 11:02:44.780070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.057 [2024-11-15 11:02:44.780105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:58.057 [2024-11-15 11:02:44.780122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.060 ms 00:18:58.057 [2024-11-15 11:02:44.780133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.057 [2024-11-15 11:02:44.780849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.057 [2024-11-15 11:02:44.780874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:58.057 [2024-11-15 11:02:44.780891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms 00:18:58.057 [2024-11-15 11:02:44.780903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.057 [2024-11-15 11:02:44.846366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.057 [2024-11-15 11:02:44.846538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.057 [2024-11-15 11:02:44.846573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.057 [2024-11-15 11:02:44.846586] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:58.057 [2024-11-15 11:02:44.846661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.057 [2024-11-15 11:02:44.846674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.057 [2024-11-15 11:02:44.846688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.057 [2024-11-15 11:02:44.846699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.057 [2024-11-15 11:02:44.846845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.057 [2024-11-15 11:02:44.846866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.057 [2024-11-15 11:02:44.846881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.057 [2024-11-15 11:02:44.846892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.057 [2024-11-15 11:02:44.846917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.057 [2024-11-15 11:02:44.846928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.057 [2024-11-15 11:02:44.846945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.057 [2024-11-15 11:02:44.846956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:44.994555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:44.994638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.320 [2024-11-15 11:02:44.994666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:44.994678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.112542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.112611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.320 [2024-11-15 11:02:45.112633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:45.112647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.112818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.112835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.320 [2024-11-15 11:02:45.112858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:45.112870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.112943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.112957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.320 [2024-11-15 11:02:45.112975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:45.112987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.113138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.113154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.320 [2024-11-15 11:02:45.113179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:58.320 [2024-11-15 11:02:45.113191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.113238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.113253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:58.320 [2024-11-15 11:02:45.113268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:45.113279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.113334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.113347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.320 [2024-11-15 11:02:45.113363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:45.113378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.113436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.320 [2024-11-15 11:02:45.113461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.320 [2024-11-15 11:02:45.113477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.320 [2024-11-15 11:02:45.113488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.320 [2024-11-15 11:02:45.113708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 804.998 ms, result 0 00:18:58.320 true 00:18:58.320 11:02:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75046 00:18:58.320 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75046 ']' 00:18:58.320 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75046 00:18:58.320 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:18:58.320 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.320 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75046 00:18:58.579 killing process with pid 75046 00:18:58.579 Received shutdown signal, test time was about 4.000000 seconds 00:18:58.579 00:18:58.579 Latency(us) 00:18:58.579 [2024-11-15T11:02:45.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.579 [2024-11-15T11:02:45.440Z] =================================================================================================================== 00:18:58.579 [2024-11-15T11:02:45.440Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.579 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.579 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.579 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75046' 00:18:58.579 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75046 00:18:58.579 11:02:45 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75046 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:02.770 Remove shared memory files 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:02.770 11:02:48 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:02.770 ************************************ 00:19:02.770 END TEST ftl_bdevperf 00:19:02.770 ************************************ 00:19:02.770 00:19:02.770 real 0m25.470s 00:19:02.770 user 0m27.877s 00:19:02.770 sys 0m1.400s 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.770 11:02:48 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:02.770 11:02:48 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:02.770 11:02:48 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:02.770 11:02:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.770 11:02:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:02.770 ************************************ 00:19:02.770 START TEST ftl_trim 00:19:02.770 ************************************ 00:19:02.770 11:02:48 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:02.770 * Looking for test storage... 00:19:02.770 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.770 11:02:49 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:02.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.770 --rc genhtml_branch_coverage=1 00:19:02.770 --rc genhtml_function_coverage=1 00:19:02.770 --rc genhtml_legend=1 00:19:02.770 --rc geninfo_all_blocks=1 00:19:02.770 --rc geninfo_unexecuted_blocks=1 00:19:02.770 00:19:02.770 ' 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:02.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.770 --rc genhtml_branch_coverage=1 00:19:02.770 --rc genhtml_function_coverage=1 00:19:02.770 --rc genhtml_legend=1 00:19:02.770 --rc geninfo_all_blocks=1 00:19:02.770 --rc geninfo_unexecuted_blocks=1 00:19:02.770 00:19:02.770 ' 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:02.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.770 --rc genhtml_branch_coverage=1 00:19:02.770 --rc genhtml_function_coverage=1 00:19:02.770 --rc genhtml_legend=1 00:19:02.770 --rc geninfo_all_blocks=1 00:19:02.770 --rc geninfo_unexecuted_blocks=1 00:19:02.770 00:19:02.770 ' 00:19:02.770 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:02.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.770 --rc genhtml_branch_coverage=1 00:19:02.770 --rc genhtml_function_coverage=1 00:19:02.770 --rc genhtml_legend=1 00:19:02.770 --rc geninfo_all_blocks=1 00:19:02.770 --rc geninfo_unexecuted_blocks=1 00:19:02.770 00:19:02.770 ' 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:02.770 11:02:49 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:02.771 11:02:49 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:02.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75409 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75409 00:19:02.771 11:02:49 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:02.771 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 75409 ']' 00:19:02.771 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.771 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.771 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.771 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.771 11:02:49 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:02.771 [2024-11-15 11:02:49.308823] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:19:02.771 [2024-11-15 11:02:49.309158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75409 ] 00:19:02.771 [2024-11-15 11:02:49.491966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:02.771 [2024-11-15 11:02:49.608814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.771 [2024-11-15 11:02:49.608939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.771 [2024-11-15 11:02:49.608973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.708 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.708 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:03.708 11:02:50 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:03.708 11:02:50 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:03.708 11:02:50 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:03.708 11:02:50 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:03.708 11:02:50 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:03.708 11:02:50 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:03.967 11:02:50 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:03.967 11:02:50 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:03.967 11:02:50 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:03.967 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:03.967 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:03.967 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:03.967 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:03.967 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:04.226 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.226 { 00:19:04.226 "name": "nvme0n1", 00:19:04.226 "aliases": [ 
00:19:04.226 "6656fefd-c358-4545-90e6-1d3b07585f8f" 00:19:04.226 ], 00:19:04.226 "product_name": "NVMe disk", 00:19:04.226 "block_size": 4096, 00:19:04.226 "num_blocks": 1310720, 00:19:04.226 "uuid": "6656fefd-c358-4545-90e6-1d3b07585f8f", 00:19:04.226 "numa_id": -1, 00:19:04.226 "assigned_rate_limits": { 00:19:04.226 "rw_ios_per_sec": 0, 00:19:04.226 "rw_mbytes_per_sec": 0, 00:19:04.226 "r_mbytes_per_sec": 0, 00:19:04.226 "w_mbytes_per_sec": 0 00:19:04.226 }, 00:19:04.226 "claimed": true, 00:19:04.226 "claim_type": "read_many_write_one", 00:19:04.226 "zoned": false, 00:19:04.226 "supported_io_types": { 00:19:04.226 "read": true, 00:19:04.226 "write": true, 00:19:04.226 "unmap": true, 00:19:04.226 "flush": true, 00:19:04.226 "reset": true, 00:19:04.226 "nvme_admin": true, 00:19:04.226 "nvme_io": true, 00:19:04.226 "nvme_io_md": false, 00:19:04.226 "write_zeroes": true, 00:19:04.226 "zcopy": false, 00:19:04.226 "get_zone_info": false, 00:19:04.226 "zone_management": false, 00:19:04.226 "zone_append": false, 00:19:04.226 "compare": true, 00:19:04.226 "compare_and_write": false, 00:19:04.226 "abort": true, 00:19:04.226 "seek_hole": false, 00:19:04.226 "seek_data": false, 00:19:04.226 "copy": true, 00:19:04.226 "nvme_iov_md": false 00:19:04.226 }, 00:19:04.226 "driver_specific": { 00:19:04.226 "nvme": [ 00:19:04.226 { 00:19:04.226 "pci_address": "0000:00:11.0", 00:19:04.226 "trid": { 00:19:04.226 "trtype": "PCIe", 00:19:04.226 "traddr": "0000:00:11.0" 00:19:04.226 }, 00:19:04.226 "ctrlr_data": { 00:19:04.226 "cntlid": 0, 00:19:04.226 "vendor_id": "0x1b36", 00:19:04.226 "model_number": "QEMU NVMe Ctrl", 00:19:04.226 "serial_number": "12341", 00:19:04.226 "firmware_revision": "8.0.0", 00:19:04.226 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:04.226 "oacs": { 00:19:04.226 "security": 0, 00:19:04.226 "format": 1, 00:19:04.226 "firmware": 0, 00:19:04.226 "ns_manage": 1 00:19:04.226 }, 00:19:04.226 "multi_ctrlr": false, 00:19:04.226 "ana_reporting": false 00:19:04.226 }, 00:19:04.226 "vs": { 00:19:04.226 "nvme_version": "1.4" 00:19:04.226 }, 00:19:04.226 "ns_data": { 00:19:04.226 "id": 1, 00:19:04.226 "can_share": false 00:19:04.226 } 00:19:04.226 } 00:19:04.226 ], 00:19:04.226 "mp_policy": "active_passive" 00:19:04.226 } 00:19:04.226 } 00:19:04.226 ]' 00:19:04.226 11:02:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.226 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.226 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.485 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:04.485 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:04.485 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:04.485 11:02:51 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=288c100e-7eb8-4a61-ab8b-48450b35bcf9 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:04.486 11:02:51 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 288c100e-7eb8-4a61-ab8b-48450b35bcf9 00:19:04.744 11:02:51 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:05.003 11:02:51 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d7191a5d-b430-4199-960f-58a227da16ce 00:19:05.003 11:02:51 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d7191a5d-b430-4199-960f-58a227da16ce 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:05.262 11:02:51 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.262 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.262 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.262 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:05.262 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:05.262 11:02:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.522 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.522 { 00:19:05.522 "name": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:05.522 "aliases": [ 00:19:05.522 "lvs/nvme0n1p0" 00:19:05.522 ], 00:19:05.522 "product_name": "Logical Volume", 00:19:05.522 "block_size": 4096, 00:19:05.522 "num_blocks": 26476544, 00:19:05.522 "uuid": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:05.522 "assigned_rate_limits": { 00:19:05.522 "rw_ios_per_sec": 0, 00:19:05.522 "rw_mbytes_per_sec": 0, 00:19:05.522 "r_mbytes_per_sec": 0, 00:19:05.522 "w_mbytes_per_sec": 0 00:19:05.522 }, 00:19:05.522 "claimed": false, 00:19:05.522 "zoned": false, 00:19:05.522 "supported_io_types": { 00:19:05.522 "read": true, 00:19:05.522 "write": true, 00:19:05.522 "unmap": true, 00:19:05.522 "flush": false, 00:19:05.522 "reset": true, 00:19:05.522 "nvme_admin": false, 00:19:05.522 "nvme_io": false, 00:19:05.522 "nvme_io_md": false, 00:19:05.522 "write_zeroes": true, 00:19:05.522 "zcopy": false, 00:19:05.522 "get_zone_info": false, 00:19:05.522 "zone_management": false, 00:19:05.522 "zone_append": false, 00:19:05.522 "compare": false, 00:19:05.522 "compare_and_write": false, 00:19:05.522 "abort": false, 00:19:05.522 "seek_hole": true, 00:19:05.522 "seek_data": true, 00:19:05.522 "copy": false, 00:19:05.522 "nvme_iov_md": false 00:19:05.522 }, 00:19:05.522 "driver_specific": { 00:19:05.522 "lvol": { 00:19:05.522 "lvol_store_uuid": "d7191a5d-b430-4199-960f-58a227da16ce", 00:19:05.522 "base_bdev": "nvme0n1", 00:19:05.522 "thin_provision": true, 00:19:05.522 "num_allocated_clusters": 0, 00:19:05.522 "snapshot": false, 00:19:05.522 "clone": false, 00:19:05.522 "esnap_clone": false 00:19:05.522 } 00:19:05.522 } 00:19:05.522 } 00:19:05.522 ]' 00:19:05.522 11:02:52 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.522 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.522 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.522 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:05.522 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:05.522 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:05.522 11:02:52 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:05.522 11:02:52 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:05.522 11:02:52 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:05.782 11:02:52 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:05.782 11:02:52 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:05.782 11:02:52 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.782 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:05.782 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:05.782 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:05.782 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:05.782 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:06.041 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:06.041 { 00:19:06.041 "name": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:06.041 "aliases": [ 00:19:06.041 "lvs/nvme0n1p0" 00:19:06.041 ], 00:19:06.041 "product_name": "Logical Volume", 00:19:06.041 "block_size": 4096, 00:19:06.041 "num_blocks": 26476544, 00:19:06.041 "uuid": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:06.041 "assigned_rate_limits": { 00:19:06.041 "rw_ios_per_sec": 0, 00:19:06.041 "rw_mbytes_per_sec": 0, 00:19:06.041 "r_mbytes_per_sec": 0, 00:19:06.041 "w_mbytes_per_sec": 0 00:19:06.041 }, 00:19:06.041 "claimed": false, 00:19:06.041 "zoned": false, 00:19:06.041 "supported_io_types": { 00:19:06.041 "read": true, 00:19:06.041 "write": true, 00:19:06.041 "unmap": true, 00:19:06.041 "flush": false, 00:19:06.041 "reset": true, 00:19:06.041 "nvme_admin": false, 00:19:06.041 "nvme_io": false, 00:19:06.041 "nvme_io_md": false, 00:19:06.041 "write_zeroes": true, 00:19:06.041 "zcopy": false, 00:19:06.041 "get_zone_info": false, 00:19:06.041 "zone_management": false, 00:19:06.041 "zone_append": false, 00:19:06.041 "compare": false, 00:19:06.041 "compare_and_write": false, 00:19:06.041 "abort": false, 00:19:06.041 "seek_hole": true, 00:19:06.041 "seek_data": true, 00:19:06.041 "copy": false, 00:19:06.041 "nvme_iov_md": false 00:19:06.041 }, 00:19:06.041 "driver_specific": { 00:19:06.041 "lvol": { 00:19:06.041 "lvol_store_uuid": "d7191a5d-b430-4199-960f-58a227da16ce", 00:19:06.041 "base_bdev": "nvme0n1", 00:19:06.041 "thin_provision": true, 00:19:06.041 "num_allocated_clusters": 0, 00:19:06.041 "snapshot": false, 00:19:06.041 "clone": false, 00:19:06.041 "esnap_clone": false 00:19:06.041 } 00:19:06.041 } 00:19:06.041 } 00:19:06.041 ]' 00:19:06.041 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:06.041 11:02:52 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:06.041 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:06.041 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:06.041 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:06.041 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:06.041 11:02:52 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:06.041 11:02:52 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:06.344 11:02:52 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:06.344 11:02:52 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:06.344 11:02:52 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:06.344 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:06.344 11:02:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:06.344 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:06.344 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:06.344 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ea5b6f9-a965-4533-90bf-fd5187c65809 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:06.614 { 00:19:06.614 "name": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:06.614 "aliases": [ 00:19:06.614 "lvs/nvme0n1p0" 00:19:06.614 ], 00:19:06.614 "product_name": "Logical Volume", 00:19:06.614 "block_size": 4096, 00:19:06.614 "num_blocks": 26476544, 00:19:06.614 "uuid": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:06.614 "assigned_rate_limits": { 00:19:06.614 "rw_ios_per_sec": 0, 00:19:06.614 "rw_mbytes_per_sec": 0, 00:19:06.614 "r_mbytes_per_sec": 0, 00:19:06.614 "w_mbytes_per_sec": 0 00:19:06.614 }, 00:19:06.614 "claimed": false, 00:19:06.614 "zoned": false, 00:19:06.614 "supported_io_types": { 00:19:06.614 "read": true, 00:19:06.614 "write": true, 00:19:06.614 "unmap": true, 00:19:06.614 "flush": false, 00:19:06.614 "reset": true, 00:19:06.614 "nvme_admin": false, 00:19:06.614 "nvme_io": false, 00:19:06.614 "nvme_io_md": false, 00:19:06.614 "write_zeroes": true, 00:19:06.614 "zcopy": false, 00:19:06.614 "get_zone_info": false, 00:19:06.614 "zone_management": false, 00:19:06.614 "zone_append": false, 00:19:06.614 "compare": false, 00:19:06.614 "compare_and_write": false, 00:19:06.614 "abort": false, 00:19:06.614 "seek_hole": true, 00:19:06.614 "seek_data": true, 00:19:06.614 "copy": false, 00:19:06.614 "nvme_iov_md": false 00:19:06.614 }, 00:19:06.614 "driver_specific": { 00:19:06.614 "lvol": { 00:19:06.614 "lvol_store_uuid": "d7191a5d-b430-4199-960f-58a227da16ce", 00:19:06.614 "base_bdev": "nvme0n1", 00:19:06.614 "thin_provision": true, 00:19:06.614 "num_allocated_clusters": 0, 00:19:06.614 "snapshot": false, 00:19:06.614 "clone": false, 00:19:06.614 "esnap_clone": false 00:19:06.614 } 00:19:06.614 } 00:19:06.614 } 00:19:06.614 ]' 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:06.614 11:02:53 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:06.614 11:02:53 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:06.614 11:02:53 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6ea5b6f9-a965-4533-90bf-fd5187c65809 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:06.874 [2024-11-15 11:02:53.487216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.487270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:06.874 [2024-11-15 11:02:53.487292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:06.874 [2024-11-15 11:02:53.487304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.490666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.490707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:06.874 [2024-11-15 11:02:53.490723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:19:06.874 [2024-11-15 11:02:53.490733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.490865] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:06.874 [2024-11-15 11:02:53.491809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:06.874 [2024-11-15 11:02:53.491847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.491859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:06.874 [2024-11-15 11:02:53.491873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms 00:19:06.874 [2024-11-15 11:02:53.491883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.491998] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:19:06.874 [2024-11-15 11:02:53.493416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.493455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:06.874 [2024-11-15 11:02:53.493468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:19:06.874 [2024-11-15 11:02:53.493481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.501022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.501207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:06.874 [2024-11-15 11:02:53.501231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.451 ms 00:19:06.874 [2024-11-15 11:02:53.501247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.501420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.501439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:06.874 [2024-11-15 11:02:53.501451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.090 ms 00:19:06.874 [2024-11-15 11:02:53.501468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.501510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.501548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:06.874 [2024-11-15 11:02:53.501560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:06.874 [2024-11-15 11:02:53.501573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.874 [2024-11-15 11:02:53.501612] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:06.874 [2024-11-15 11:02:53.506623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.874 [2024-11-15 11:02:53.506654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:06.874 [2024-11-15 11:02:53.506673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:19:06.875 [2024-11-15 11:02:53.506684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.875 [2024-11-15 11:02:53.506752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.875 [2024-11-15 11:02:53.506764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:06.875 [2024-11-15 11:02:53.506777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:06.875 [2024-11-15 11:02:53.506805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.875 [2024-11-15 11:02:53.506840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:06.875 [2024-11-15 11:02:53.506966] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:06.875 [2024-11-15 11:02:53.506986] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:06.875 [2024-11-15 11:02:53.507000] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:06.875 [2024-11-15 11:02:53.507016] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507029] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507043] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:06.875 [2024-11-15 11:02:53.507053] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:06.875 [2024-11-15 11:02:53.507065] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:06.875 [2024-11-15 11:02:53.507078] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:06.875 [2024-11-15 11:02:53.507091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.875 [2024-11-15 11:02:53.507101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:06.875 [2024-11-15 11:02:53.507114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:19:06.875 [2024-11-15 11:02:53.507125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.875 [2024-11-15 11:02:53.507217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.875 
[2024-11-15 11:02:53.507228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:06.875 [2024-11-15 11:02:53.507241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:06.875 [2024-11-15 11:02:53.507251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.875 [2024-11-15 11:02:53.507369] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:06.875 [2024-11-15 11:02:53.507381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:06.875 [2024-11-15 11:02:53.507394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:06.875 [2024-11-15 11:02:53.507428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:06.875 [2024-11-15 11:02:53.507462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.875 [2024-11-15 11:02:53.507484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:06.875 [2024-11-15 11:02:53.507494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:06.875 [2024-11-15 11:02:53.507506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:06.875 [2024-11-15 11:02:53.507516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:06.875 [2024-11-15 11:02:53.507547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:06.875 [2024-11-15 11:02:53.507557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:06.875 [2024-11-15 11:02:53.507581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:06.875 [2024-11-15 11:02:53.507617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:06.875 [2024-11-15 11:02:53.507647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:06.875 [2024-11-15 11:02:53.507700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:06.875 [2024-11-15 11:02:53.507730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:06.875 [2024-11-15 11:02:53.507765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.875 [2024-11-15 11:02:53.507786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:06.875 [2024-11-15 11:02:53.507795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:06.875 [2024-11-15 11:02:53.507807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:06.875 [2024-11-15 11:02:53.507816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:06.875 [2024-11-15 11:02:53.507827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:06.875 [2024-11-15 11:02:53.507839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:06.875 [2024-11-15 11:02:53.507860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:06.875 [2024-11-15 11:02:53.507872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507881] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:06.875 [2024-11-15 11:02:53.507893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:06.875 [2024-11-15 11:02:53.507904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:06.875 [2024-11-15 11:02:53.507916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:06.875 [2024-11-15 11:02:53.507927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:06.875 [2024-11-15 11:02:53.507943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:06.875 [2024-11-15 11:02:53.507952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:06.875 [2024-11-15 11:02:53.507964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:06.875 [2024-11-15 11:02:53.507974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:06.875 [2024-11-15 11:02:53.508000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:06.875 [2024-11-15 11:02:53.508014] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:06.875 [2024-11-15 11:02:53.508029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:06.875 [2024-11-15 11:02:53.508054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:06.875 [2024-11-15 11:02:53.508064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:06.875 [2024-11-15 11:02:53.508077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:06.875 [2024-11-15 11:02:53.508088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:06.875 [2024-11-15 11:02:53.508100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:06.875 [2024-11-15 11:02:53.508111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:06.875 [2024-11-15 11:02:53.508123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:06.875 [2024-11-15 11:02:53.508134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:06.875 [2024-11-15 11:02:53.508149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:06.875 [2024-11-15 11:02:53.508205] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:06.875 [2024-11-15 11:02:53.508227] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508238] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:06.875 [2024-11-15 11:02:53.508251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:06.875 [2024-11-15 11:02:53.508261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:06.875 [2024-11-15 11:02:53.508274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:06.875 [2024-11-15 11:02:53.508286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:06.876 [2024-11-15 11:02:53.508300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:06.876 [2024-11-15 11:02:53.508310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:19:06.876 [2024-11-15 11:02:53.508323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:06.876 [2024-11-15 11:02:53.508411] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:06.876 [2024-11-15 11:02:53.508428] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:10.165 [2024-11-15 11:02:56.829129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.829208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:10.165 [2024-11-15 11:02:56.829226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3326.106 ms 00:19:10.165 [2024-11-15 11:02:56.829294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.868564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.868621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.165 [2024-11-15 11:02:56.868639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.927 ms 00:19:10.165 [2024-11-15 11:02:56.868653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.868866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.868882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:10.165 [2024-11-15 11:02:56.868894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:10.165 [2024-11-15 11:02:56.868910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.926027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.926088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.165 [2024-11-15 11:02:56.926109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.104 ms 00:19:10.165 [2024-11-15 11:02:56.926130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.926277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.926298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.165 [2024-11-15 11:02:56.926313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.165 [2024-11-15 11:02:56.926330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.926903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.926929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.165 [2024-11-15 11:02:56.926945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.480 ms 00:19:10.165 [2024-11-15 11:02:56.926961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.927194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.927221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.165 [2024-11-15 11:02:56.927237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:19:10.165 [2024-11-15 11:02:56.927258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.949206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.949409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:10.165 [2024-11-15 11:02:56.949434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.860 ms 00:19:10.165 [2024-11-15 11:02:56.949449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.165 [2024-11-15 11:02:56.962111] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:10.165 [2024-11-15 11:02:56.978865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.165 [2024-11-15 11:02:56.978916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:10.165 [2024-11-15 11:02:56.978936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.198 ms 00:19:10.165 [2024-11-15 11:02:56.978947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.425 [2024-11-15 11:02:57.070424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.425 [2024-11-15 11:02:57.070682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:10.425 [2024-11-15 11:02:57.070724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.437 ms 00:19:10.425 [2024-11-15 11:02:57.070736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.425 [2024-11-15 11:02:57.071027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.425 [2024-11-15 11:02:57.071042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:10.425 [2024-11-15 11:02:57.071060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:19:10.425 [2024-11-15 11:02:57.071070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.425 [2024-11-15 11:02:57.109744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.425 [2024-11-15 11:02:57.109903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:10.425 [2024-11-15 11:02:57.109933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.654 ms 00:19:10.425 [2024-11-15 11:02:57.109945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.425 [2024-11-15 11:02:57.147579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.425 [2024-11-15 11:02:57.147621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:10.425 [2024-11-15 11:02:57.147640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.502 ms 00:19:10.425 [2024-11-15 11:02:57.147650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.425 [2024-11-15 11:02:57.148412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.425 [2024-11-15 11:02:57.148438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:10.425 [2024-11-15 11:02:57.148453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:19:10.425 [2024-11-15 11:02:57.148464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.425 [2024-11-15 11:02:57.260109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.425 [2024-11-15 11:02:57.260161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:10.425 [2024-11-15 11:02:57.260188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.700 ms 00:19:10.425 [2024-11-15 11:02:57.260199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
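For reference, the size probing that repeats three times earlier in this trace follows a single pattern: the `get_bdev_size` helper (common/autotest_common.sh) fetches the lvol's JSON via `bdev_get_bdevs`, pulls `block_size` and `num_blocks` with jq, and converts blocks to MiB. A minimal sketch reconstructed from the xtrace above, using the rpc.py path from this run; the helper's exact internals are an assumption, inferred from the traced commands and variable assignments (bs, nb, bdev_size):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run

get_bdev_size() {
    # Reconstructed from the traced steps at autotest_common.sh@1382-1392.
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$("$rpc" bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 in this run
    echo $((bs * nb / 1024 / 1024))                # size in MiB
}

get_bdev_size 6ea5b6f9-a965-4533-90bf-fd5187c65809   # -> 103424
```

With bs=4096 and nb=26476544 this yields the 103424 MiB seen in the trace; ftl/common.sh then takes cache_size=5171 MiB (roughly 5% of that base size) for the nvc0n1p0 write-buffer split, which matches the "NV cache device capacity: 5171.00 MiB" reported during FTL layout setup above.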
00:19:10.684 [2024-11-15 11:02:57.298841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.684 [2024-11-15 11:02:57.298885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:10.684 [2024-11-15 11:02:57.298903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.491 ms 00:19:10.684 [2024-11-15 11:02:57.298914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.684 [2024-11-15 11:02:57.336152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.684 [2024-11-15 11:02:57.336192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:10.684 [2024-11-15 11:02:57.336210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.163 ms 00:19:10.684 [2024-11-15 11:02:57.336221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.684 [2024-11-15 11:02:57.373127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.684 [2024-11-15 11:02:57.373168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:10.684 [2024-11-15 11:02:57.373185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.842 ms 00:19:10.684 [2024-11-15 11:02:57.373214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.684 [2024-11-15 11:02:57.373369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.684 [2024-11-15 11:02:57.373386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:10.684 [2024-11-15 11:02:57.373403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:10.684 [2024-11-15 11:02:57.373414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.684 [2024-11-15 11:02:57.373563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.684 [2024-11-15 11:02:57.373577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:10.684 [2024-11-15 11:02:57.373592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:10.684 [2024-11-15 11:02:57.373609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.684 [2024-11-15 11:02:57.374997] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.684 [2024-11-15 11:02:57.379231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3893.603 ms, result 0 00:19:10.684 [2024-11-15 11:02:57.380474] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:10.684 { 00:19:10.684 "name": "ftl0", 00:19:10.684 "uuid": "7f41f37e-4d8b-4639-af11-0cd684c222f0" 00:19:10.684 } 00:19:10.684 11:02:57 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:10.684 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:10.684 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:10.684 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:10.684 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:10.684 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:10.684 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:10.943 11:02:57 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:11.201 [ 00:19:11.201 { 00:19:11.201 "name": "ftl0", 00:19:11.201 "aliases": [ 00:19:11.201 "7f41f37e-4d8b-4639-af11-0cd684c222f0" 00:19:11.201 ], 00:19:11.201 "product_name": "FTL disk", 00:19:11.201 "block_size": 4096, 00:19:11.201 "num_blocks": 23592960, 00:19:11.201 "uuid": "7f41f37e-4d8b-4639-af11-0cd684c222f0", 00:19:11.201 "assigned_rate_limits": { 00:19:11.201 "rw_ios_per_sec": 0, 00:19:11.201 "rw_mbytes_per_sec": 0, 00:19:11.201 "r_mbytes_per_sec": 0, 00:19:11.201 "w_mbytes_per_sec": 0 00:19:11.201 }, 00:19:11.201 "claimed": false, 00:19:11.201 "zoned": false, 00:19:11.201 "supported_io_types": { 00:19:11.201 "read": true, 00:19:11.201 "write": true, 00:19:11.201 "unmap": true, 00:19:11.201 "flush": true, 00:19:11.201 "reset": false, 00:19:11.201 "nvme_admin": false, 00:19:11.201 "nvme_io": false, 00:19:11.201 "nvme_io_md": false, 00:19:11.201 "write_zeroes": true, 00:19:11.201 "zcopy": false, 00:19:11.201 "get_zone_info": false, 00:19:11.201 "zone_management": false, 00:19:11.201 "zone_append": false, 00:19:11.201 "compare": false, 00:19:11.201 "compare_and_write": false, 00:19:11.201 "abort": false, 00:19:11.201 "seek_hole": false, 00:19:11.201 "seek_data": false, 00:19:11.201 "copy": false, 00:19:11.201 "nvme_iov_md": false 00:19:11.201 }, 00:19:11.201 "driver_specific": { 00:19:11.201 "ftl": { 00:19:11.201 "base_bdev": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 00:19:11.201 "cache": "nvc0n1p0" 00:19:11.201 } 00:19:11.201 } 00:19:11.201 } 00:19:11.201 ] 00:19:11.201 11:02:57 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:11.201 11:02:57 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:11.201 11:02:57 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:11.201 11:02:58 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:11.201 11:02:58 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:11.460 11:02:58 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:11.460 { 00:19:11.460 "name": "ftl0", 00:19:11.460 "aliases": [ 00:19:11.460 "7f41f37e-4d8b-4639-af11-0cd684c222f0" 00:19:11.460 ], 00:19:11.460 "product_name": "FTL disk", 00:19:11.460 "block_size": 4096, 00:19:11.460 "num_blocks": 23592960, 00:19:11.460 "uuid": "7f41f37e-4d8b-4639-af11-0cd684c222f0", 00:19:11.460 "assigned_rate_limits": { 00:19:11.460 "rw_ios_per_sec": 0, 00:19:11.460 "rw_mbytes_per_sec": 0, 00:19:11.460 "r_mbytes_per_sec": 0, 00:19:11.460 "w_mbytes_per_sec": 0 00:19:11.460 }, 00:19:11.460 "claimed": false, 00:19:11.460 "zoned": false, 00:19:11.460 "supported_io_types": { 00:19:11.460 "read": true, 00:19:11.460 "write": true, 00:19:11.460 "unmap": true, 00:19:11.460 "flush": true, 00:19:11.460 "reset": false, 00:19:11.460 "nvme_admin": false, 00:19:11.460 "nvme_io": false, 00:19:11.460 "nvme_io_md": false, 00:19:11.460 "write_zeroes": true, 00:19:11.460 "zcopy": false, 00:19:11.460 "get_zone_info": false, 00:19:11.460 "zone_management": false, 00:19:11.460 "zone_append": false, 00:19:11.460 "compare": false, 00:19:11.460 "compare_and_write": false, 00:19:11.460 "abort": false, 00:19:11.460 "seek_hole": false, 00:19:11.460 "seek_data": false, 00:19:11.460 "copy": false, 00:19:11.460 "nvme_iov_md": false 00:19:11.460 }, 00:19:11.460 "driver_specific": { 00:19:11.460 "ftl": { 00:19:11.460 "base_bdev": "6ea5b6f9-a965-4533-90bf-fd5187c65809", 
00:19:11.460 "cache": "nvc0n1p0" 00:19:11.460 } 00:19:11.460 } 00:19:11.460 } 00:19:11.460 ]' 00:19:11.460 11:02:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:11.460 11:02:58 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:11.460 11:02:58 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:11.719 [2024-11-15 11:02:58.462182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.462240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:11.719 [2024-11-15 11:02:58.462260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:11.719 [2024-11-15 11:02:58.462276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.462363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:11.719 [2024-11-15 11:02:58.466769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.466804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:11.719 [2024-11-15 11:02:58.466824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.381 ms 00:19:11.719 [2024-11-15 11:02:58.466835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.467885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.467912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:11.719 [2024-11-15 11:02:58.467927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:19:11.719 [2024-11-15 11:02:58.467938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.470800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.470826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:11.719 [2024-11-15 11:02:58.470840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:19:11.719 [2024-11-15 11:02:58.470851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.476525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.476565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:11.719 [2024-11-15 11:02:58.476580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.591 ms 00:19:11.719 [2024-11-15 11:02:58.476606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.513279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.513319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:11.719 [2024-11-15 11:02:58.513356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.583 ms 00:19:11.719 [2024-11-15 11:02:58.513367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.534976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.535013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:11.719 [2024-11-15 11:02:58.535030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.497 ms 00:19:11.719 [2024-11-15 11:02:58.535060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.535450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.535464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:11.719 [2024-11-15 11:02:58.535478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:19:11.719 [2024-11-15 11:02:58.535488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.719 [2024-11-15 11:02:58.572067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.719 [2024-11-15 11:02:58.572225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:11.719 [2024-11-15 11:02:58.572269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.552 ms 00:19:11.719 [2024-11-15 11:02:58.572280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.978 [2024-11-15 11:02:58.607968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.978 [2024-11-15 11:02:58.608002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:11.978 [2024-11-15 11:02:58.608021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.535 ms 00:19:11.978 [2024-11-15 11:02:58.608047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.978 [2024-11-15 11:02:58.643165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.978 [2024-11-15 11:02:58.643335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:11.978 [2024-11-15 11:02:58.643362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.058 ms 00:19:11.978 [2024-11-15 11:02:58.643372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.978 [2024-11-15 11:02:58.679067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.978 [2024-11-15 11:02:58.679103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:11.978 [2024-11-15 11:02:58.679120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.426 ms 00:19:11.978 [2024-11-15 11:02:58.679130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.978 [2024-11-15 11:02:58.679256] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:11.978 [2024-11-15 11:02:58.679274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679365] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 
[2024-11-15 11:02:58.679724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.679999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.680012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.680022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:11.978 [2024-11-15 11:02:58.680036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.680046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.680059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.680070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:11.978 [2024-11-15 11:02:58.680082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:11.979 [2024-11-15 11:02:58.680582] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:11.979 [2024-11-15 11:02:58.680597] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:19:11.979 [2024-11-15 11:02:58.680609] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:11.979 [2024-11-15 11:02:58.680622] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:11.979 [2024-11-15 11:02:58.680632] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:11.979 [2024-11-15 11:02:58.680645] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:11.979 [2024-11-15 11:02:58.680659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:11.979 [2024-11-15 11:02:58.680671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:11.979 [2024-11-15 11:02:58.680682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:11.979 [2024-11-15 11:02:58.680694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:11.979 [2024-11-15 11:02:58.680702] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:11.979 [2024-11-15 11:02:58.680717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.979 [2024-11-15 11:02:58.680727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:11.979 [2024-11-15 11:02:58.680741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:19:11.979 [2024-11-15 11:02:58.680751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.979 [2024-11-15 11:02:58.700728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.979 [2024-11-15 11:02:58.700763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:11.979 [2024-11-15 11:02:58.700785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.943 ms 00:19:11.979 [2024-11-15 11:02:58.700795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.979 [2024-11-15 11:02:58.701385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.979 [2024-11-15 11:02:58.701402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:11.979 [2024-11-15 11:02:58.701416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:19:11.979 [2024-11-15 11:02:58.701426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.979 [2024-11-15 11:02:58.771691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.979 [2024-11-15 11:02:58.771735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.979 [2024-11-15 11:02:58.771751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.979 [2024-11-15 11:02:58.771778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.979 [2024-11-15 11:02:58.771980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.979 [2024-11-15 11:02:58.771993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.979 [2024-11-15 11:02:58.772007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.979 [2024-11-15 11:02:58.772017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.979 [2024-11-15 11:02:58.772131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.979 [2024-11-15 11:02:58.772144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.979 [2024-11-15 11:02:58.772164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.979 [2024-11-15 11:02:58.772174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.979 [2024-11-15 11:02:58.772229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.979 [2024-11-15 11:02:58.772240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.979 [2024-11-15 11:02:58.772253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.979 [2024-11-15 11:02:58.772264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:58.905303] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:58.905363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:12.236 [2024-11-15 11:02:58.905381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:58.905407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.023889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.023975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:12.236 [2024-11-15 11:02:59.023997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.024010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.024199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.024214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:12.236 [2024-11-15 11:02:59.024255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.024271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.024338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.024349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:12.236 [2024-11-15 11:02:59.024364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.024374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.024578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.024594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:12.236 [2024-11-15 11:02:59.024609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.024620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.024740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.024759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:12.236 [2024-11-15 11:02:59.024774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.024785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.024864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.024877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:12.236 [2024-11-15 11:02:59.024895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.024907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:12.236 [2024-11-15 11:02:59.024988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:12.236 [2024-11-15 11:02:59.025001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:12.236 [2024-11-15 11:02:59.025023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:12.236 [2024-11-15 11:02:59.025034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:12.236 [2024-11-15 11:02:59.025266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 563.991 ms, result 0 00:19:12.236 true 00:19:12.236 11:02:59 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75409 00:19:12.236 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 75409 ']' 00:19:12.236 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 75409 00:19:12.236 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:12.236 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.236 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75409 00:19:12.494 killing process with pid 75409 00:19:12.494 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.494 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.494 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75409' 00:19:12.494 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 75409 00:19:12.494 11:02:59 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 75409 00:19:17.760 11:03:04 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:18.696 65536+0 records in 00:19:18.696 65536+0 records out 00:19:18.696 268435456 bytes (268 MB, 256 MiB) copied, 1.04034 s, 258 MB/s 00:19:18.696 11:03:05 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:18.696 [2024-11-15 11:03:05.438671] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
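The shell trace above shows trim.sh tearing down the previous FTL app (a kill -0 liveness probe, a reactor_0 process-name check, then kill and wait on pid 75409) before staging data for the trim test: 65536 records of 4 KiB from /dev/urandom is exactly the 268435456 bytes (256 MiB) reported, which spdk_dd then replays onto the ftl0 bdev. A minimal sketch of the two staging steps, using the paths visible in the log (the dd output target is not captured in the trace, so of= here is an assumption made only to keep the snippet self-contained):

    # Generate the random pattern: 65536 * 4 KiB = 268435456 bytes = 256 MiB,
    # matching the dd summary in the log above.
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536

    # Replay the pattern onto the ftl0 bdev using the JSON config the test dumped earlier.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

The "Copying:" progress snapshots further down show the FTL-backed copy averaging about 23 MBps, against the 258 MB/s dd reports for generating the raw pattern.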
00:19:18.696 [2024-11-15 11:03:05.438789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75618 ] 00:19:18.995 [2024-11-15 11:03:05.619940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.995 [2024-11-15 11:03:05.763194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.577 [2024-11-15 11:03:06.176653] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.577 [2024-11-15 11:03:06.176731] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:19.577 [2024-11-15 11:03:06.344825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.345113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:19.577 [2024-11-15 11:03:06.345144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.577 [2024-11-15 11:03:06.345156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.348658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.348700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:19.577 [2024-11-15 11:03:06.348713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.475 ms 00:19:19.577 [2024-11-15 11:03:06.348724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.348835] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:19.577 [2024-11-15 11:03:06.349836] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:19.577 [2024-11-15 11:03:06.349881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.349893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:19.577 [2024-11-15 11:03:06.349904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.058 ms 00:19:19.577 [2024-11-15 11:03:06.349915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.352545] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:19.577 [2024-11-15 11:03:06.372478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.372743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:19.577 [2024-11-15 11:03:06.372768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.967 ms 00:19:19.577 [2024-11-15 11:03:06.372780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.372888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.372903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:19.577 [2024-11-15 11:03:06.372915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:19.577 [2024-11-15 11:03:06.372927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.384944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:19.577 [2024-11-15 11:03:06.384978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:19.577 [2024-11-15 11:03:06.384992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.975 ms 00:19:19.577 [2024-11-15 11:03:06.385004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.385135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.385151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:19.577 [2024-11-15 11:03:06.385163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:19.577 [2024-11-15 11:03:06.385173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.385206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.385222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:19.577 [2024-11-15 11:03:06.385233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:19.577 [2024-11-15 11:03:06.385244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.577 [2024-11-15 11:03:06.385268] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:19.577 [2024-11-15 11:03:06.391017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.577 [2024-11-15 11:03:06.391048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:19.578 [2024-11-15 11:03:06.391061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.763 ms 00:19:19.578 [2024-11-15 11:03:06.391071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.578 [2024-11-15 11:03:06.391123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.578 [2024-11-15 11:03:06.391135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:19.578 [2024-11-15 11:03:06.391147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:19.578 [2024-11-15 11:03:06.391157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.578 [2024-11-15 11:03:06.391179] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:19.578 [2024-11-15 11:03:06.391208] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:19.578 [2024-11-15 11:03:06.391251] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:19.578 [2024-11-15 11:03:06.391269] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:19.578 [2024-11-15 11:03:06.391359] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:19.578 [2024-11-15 11:03:06.391372] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:19.578 [2024-11-15 11:03:06.391387] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:19.578 [2024-11-15 11:03:06.391401] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391417] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391429] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:19.578 [2024-11-15 11:03:06.391439] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:19.578 [2024-11-15 11:03:06.391448] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:19.578 [2024-11-15 11:03:06.391458] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:19.578 [2024-11-15 11:03:06.391469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.578 [2024-11-15 11:03:06.391479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:19.578 [2024-11-15 11:03:06.391489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:19:19.578 [2024-11-15 11:03:06.391499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.578 [2024-11-15 11:03:06.391597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.578 [2024-11-15 11:03:06.391611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:19.578 [2024-11-15 11:03:06.391625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:19:19.578 [2024-11-15 11:03:06.391636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.578 [2024-11-15 11:03:06.391725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:19.578 [2024-11-15 11:03:06.391738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:19.578 [2024-11-15 11:03:06.391750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:19.578 [2024-11-15 11:03:06.391781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:19.578 [2024-11-15 11:03:06.391809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.578 [2024-11-15 11:03:06.391829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:19.578 [2024-11-15 11:03:06.391839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:19.578 [2024-11-15 11:03:06.391847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:19.578 [2024-11-15 11:03:06.391867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:19.578 [2024-11-15 11:03:06.391877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:19.578 [2024-11-15 11:03:06.391886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:19.578 [2024-11-15 11:03:06.391904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391913] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:19.578 [2024-11-15 11:03:06.391931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:19.578 [2024-11-15 11:03:06.391958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.578 [2024-11-15 11:03:06.391976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:19.578 [2024-11-15 11:03:06.391985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:19.578 [2024-11-15 11:03:06.391994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.578 [2024-11-15 11:03:06.392003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:19.578 [2024-11-15 11:03:06.392011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:19.578 [2024-11-15 11:03:06.392020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:19.578 [2024-11-15 11:03:06.392029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:19.578 [2024-11-15 11:03:06.392038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:19.578 [2024-11-15 11:03:06.392048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.578 [2024-11-15 11:03:06.392057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:19.578 [2024-11-15 11:03:06.392066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:19.578 [2024-11-15 11:03:06.392075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:19.578 [2024-11-15 11:03:06.392084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:19.578 [2024-11-15 11:03:06.392093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:19.578 [2024-11-15 11:03:06.392101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.392110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:19.578 [2024-11-15 11:03:06.392121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:19.578 [2024-11-15 11:03:06.392130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.392139] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:19.578 [2024-11-15 11:03:06.392148] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:19.578 [2024-11-15 11:03:06.392158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:19.578 [2024-11-15 11:03:06.392172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:19.578 [2024-11-15 11:03:06.392182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:19.578 [2024-11-15 11:03:06.392192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:19.578 [2024-11-15 11:03:06.392201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:19.578 
[2024-11-15 11:03:06.392211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:19.578 [2024-11-15 11:03:06.392220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:19.578 [2024-11-15 11:03:06.392229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:19.578 [2024-11-15 11:03:06.392239] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:19.578 [2024-11-15 11:03:06.392252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.578 [2024-11-15 11:03:06.392262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:19.578 [2024-11-15 11:03:06.392272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:19.578 [2024-11-15 11:03:06.392281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:19.578 [2024-11-15 11:03:06.392291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:19.578 [2024-11-15 11:03:06.392300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:19.578 [2024-11-15 11:03:06.392310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:19.578 [2024-11-15 11:03:06.392319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:19.578 [2024-11-15 11:03:06.392329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:19.578 [2024-11-15 11:03:06.392339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:19.578 [2024-11-15 11:03:06.392349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:19.578 [2024-11-15 11:03:06.392360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:19.578 [2024-11-15 11:03:06.392369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:19.578 [2024-11-15 11:03:06.392378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:19.578 [2024-11-15 11:03:06.392387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:19.578 [2024-11-15 11:03:06.392397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:19.578 [2024-11-15 11:03:06.392407] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:19.578 [2024-11-15 11:03:06.392418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:19.579 [2024-11-15 11:03:06.392428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:19.579 [2024-11-15 11:03:06.392443] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:19.579 [2024-11-15 11:03:06.392453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:19.579 [2024-11-15 11:03:06.392463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.579 [2024-11-15 11:03:06.392473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:19.579 [2024-11-15 11:03:06.392488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.792 ms 00:19:19.579 [2024-11-15 11:03:06.392499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.838 [2024-11-15 11:03:06.441638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.838 [2024-11-15 11:03:06.441685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:19.838 [2024-11-15 11:03:06.441701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.145 ms 00:19:19.838 [2024-11-15 11:03:06.441713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.838 [2024-11-15 11:03:06.441916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.838 [2024-11-15 11:03:06.441931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:19.838 [2024-11-15 11:03:06.441942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:19.838 [2024-11-15 11:03:06.441953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.838 [2024-11-15 11:03:06.511966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.838 [2024-11-15 11:03:06.512035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.838 [2024-11-15 11:03:06.512066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.098 ms 00:19:19.838 [2024-11-15 11:03:06.512087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.512264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.512291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.839 [2024-11-15 11:03:06.512311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:19.839 [2024-11-15 11:03:06.512331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.513139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.513172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.839 [2024-11-15 11:03:06.513184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.764 ms 00:19:19.839 [2024-11-15 11:03:06.513204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.513354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.513369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.839 [2024-11-15 11:03:06.513381] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:19.839 [2024-11-15 11:03:06.513392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.538088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.538135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.839 [2024-11-15 11:03:06.538153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.709 ms 00:19:19.839 [2024-11-15 11:03:06.538165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.558663] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:19.839 [2024-11-15 11:03:06.558714] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:19.839 [2024-11-15 11:03:06.558732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.558745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:19.839 [2024-11-15 11:03:06.558757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.427 ms 00:19:19.839 [2024-11-15 11:03:06.558769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.589718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.589778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:19.839 [2024-11-15 11:03:06.589815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.904 ms 00:19:19.839 [2024-11-15 11:03:06.589827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.607730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.607769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:19.839 [2024-11-15 11:03:06.607782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.838 ms 00:19:19.839 [2024-11-15 11:03:06.607793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.625072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.625324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:19.839 [2024-11-15 11:03:06.625357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.221 ms 00:19:19.839 [2024-11-15 11:03:06.625376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.839 [2024-11-15 11:03:06.626245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.839 [2024-11-15 11:03:06.626274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:19.839 [2024-11-15 11:03:06.626288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:19:19.839 [2024-11-15 11:03:06.626299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-11-15 11:03:06.726832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.097 [2024-11-15 11:03:06.727098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:20.097 [2024-11-15 11:03:06.727128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 100.662 ms 00:19:20.097 [2024-11-15 11:03:06.727140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.097 [2024-11-15 11:03:06.739530] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:20.097 [2024-11-15 11:03:06.766519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.097 [2024-11-15 11:03:06.766602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:20.097 [2024-11-15 11:03:06.766622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.273 ms 00:19:20.098 [2024-11-15 11:03:06.766634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.098 [2024-11-15 11:03:06.766841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.098 [2024-11-15 11:03:06.766861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:20.098 [2024-11-15 11:03:06.766874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:20.098 [2024-11-15 11:03:06.766885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.098 [2024-11-15 11:03:06.766961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.098 [2024-11-15 11:03:06.766974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:20.098 [2024-11-15 11:03:06.766986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:19:20.098 [2024-11-15 11:03:06.766997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.098 [2024-11-15 11:03:06.767033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.098 [2024-11-15 11:03:06.767045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:20.098 [2024-11-15 11:03:06.767061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:20.098 [2024-11-15 11:03:06.767072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.098 [2024-11-15 11:03:06.767112] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:20.098 [2024-11-15 11:03:06.767138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.098 [2024-11-15 11:03:06.767149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:20.098 [2024-11-15 11:03:06.767161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:20.098 [2024-11-15 11:03:06.767171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.098 [2024-11-15 11:03:06.805499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.098 [2024-11-15 11:03:06.805733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:20.098 [2024-11-15 11:03:06.805761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.358 ms 00:19:20.098 [2024-11-15 11:03:06.805773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.098 [2024-11-15 11:03:06.805976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.098 [2024-11-15 11:03:06.805995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:20.098 [2024-11-15 11:03:06.806008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:20.098 [2024-11-15 11:03:06.806019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
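In the layout dump printed during startup above, the size of the l2p region follows directly from the reported L2P parameters; a quick shell-arithmetic sanity check (nothing SPDK-specific, figures copied from the log):

    # "L2P entries: 23592960" x "L2P address size: 4" bytes per entry
    echo $(( 23592960 * 4 ))                # 94371840 bytes
    echo $(( 23592960 * 4 / 1024 / 1024 ))  # 90 -> matches "Region l2p ... blocks: 90.00 MiB"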
00:19:20.098 [2024-11-15 11:03:06.807458] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:20.098 [2024-11-15 11:03:06.812545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 463.016 ms, result 0 00:19:20.098 [2024-11-15 11:03:06.813586] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:20.098 [2024-11-15 11:03:06.832143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:21.033  [2024-11-15T11:03:09.270Z] Copying: 23/256 [MB] (23 MBps) [2024-11-15T11:03:09.837Z] Copying: 47/256 [MB] (23 MBps) [2024-11-15T11:03:11.216Z] Copying: 70/256 [MB] (23 MBps) [2024-11-15T11:03:12.153Z] Copying: 94/256 [MB] (23 MBps) [2024-11-15T11:03:13.090Z] Copying: 117/256 [MB] (23 MBps) [2024-11-15T11:03:14.027Z] Copying: 140/256 [MB] (23 MBps) [2024-11-15T11:03:14.982Z] Copying: 163/256 [MB] (23 MBps) [2024-11-15T11:03:15.918Z] Copying: 187/256 [MB] (23 MBps) [2024-11-15T11:03:16.853Z] Copying: 210/256 [MB] (23 MBps) [2024-11-15T11:03:17.859Z] Copying: 234/256 [MB] (23 MBps) [2024-11-15T11:03:17.859Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-15 11:03:17.744076] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.998 [2024-11-15 11:03:17.759843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.759906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:30.998 [2024-11-15 11:03:17.759925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:30.998 [2024-11-15 11:03:17.759936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.759961] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:30.998 [2024-11-15 11:03:17.764748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.764793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:30.998 [2024-11-15 11:03:17.764806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.776 ms 00:19:30.998 [2024-11-15 11:03:17.764816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.766781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.766822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:30.998 [2024-11-15 11:03:17.766836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms 00:19:30.998 [2024-11-15 11:03:17.766857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.773867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.774099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:30.998 [2024-11-15 11:03:17.774132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.002 ms 00:19:30.998 [2024-11-15 11:03:17.774143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.779961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.780000] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:30.998 [2024-11-15 11:03:17.780013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.784 ms 00:19:30.998 [2024-11-15 11:03:17.780023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.817248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.817289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:30.998 [2024-11-15 11:03:17.817303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.217 ms 00:19:30.998 [2024-11-15 11:03:17.817314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.839216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.839258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:30.998 [2024-11-15 11:03:17.839281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.879 ms 00:19:30.998 [2024-11-15 11:03:17.839297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.998 [2024-11-15 11:03:17.839447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.998 [2024-11-15 11:03:17.839461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:30.998 [2024-11-15 11:03:17.839473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:30.998 [2024-11-15 11:03:17.839485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.259 [2024-11-15 11:03:17.877105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.259 [2024-11-15 11:03:17.877142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.259 [2024-11-15 11:03:17.877157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.660 ms 00:19:31.259 [2024-11-15 11:03:17.877168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.259 [2024-11-15 11:03:17.914095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.259 [2024-11-15 11:03:17.914258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.259 [2024-11-15 11:03:17.914422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.928 ms 00:19:31.259 [2024-11-15 11:03:17.914439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.259 [2024-11-15 11:03:17.951560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.259 [2024-11-15 11:03:17.951599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.259 [2024-11-15 11:03:17.951614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.030 ms 00:19:31.259 [2024-11-15 11:03:17.951624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.259 [2024-11-15 11:03:17.988106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.259 [2024-11-15 11:03:17.988162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.259 [2024-11-15 11:03:17.988177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.452 ms 00:19:31.259 [2024-11-15 11:03:17.988187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.259 [2024-11-15 11:03:17.988248] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:19:31.259 [2024-11-15 11:03:17.988275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:31.259 [2024-11-15 11:03:17.988713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.988999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989143] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989446] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.260 [2024-11-15 11:03:17.989480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.260 [2024-11-15 11:03:17.989492] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:19:31.260 [2024-11-15 11:03:17.989504] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.260 [2024-11-15 11:03:17.989514] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.260 [2024-11-15 11:03:17.989545] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.260 [2024-11-15 11:03:17.989557] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.260 [2024-11-15 11:03:17.989567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.260 [2024-11-15 11:03:17.989578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:31.260 [2024-11-15 11:03:17.989589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.260 [2024-11-15 11:03:17.989598] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.260 [2024-11-15 11:03:17.989608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.260 [2024-11-15 11:03:17.989618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.260 [2024-11-15 11:03:17.989630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.260 [2024-11-15 11:03:17.989646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:19:31.260 [2024-11-15 11:03:17.989656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.260 [2024-11-15 11:03:18.011087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.260 [2024-11-15 11:03:18.011123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.260 [2024-11-15 11:03:18.011137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.442 ms 00:19:31.260 [2024-11-15 11:03:18.011148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.260 [2024-11-15 11:03:18.011767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.260 [2024-11-15 11:03:18.011792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.260 [2024-11-15 11:03:18.011804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:19:31.260 [2024-11-15 11:03:18.011815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.260 [2024-11-15 11:03:18.071637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.260 [2024-11-15 11:03:18.071678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.260 [2024-11-15 11:03:18.071704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.260 [2024-11-15 11:03:18.071717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.260 [2024-11-15 11:03:18.071819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.260 [2024-11-15 11:03:18.071836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.260 [2024-11-15 11:03:18.071848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.260 [2024-11-15 11:03:18.071859] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.260 [2024-11-15 11:03:18.071936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.260 [2024-11-15 11:03:18.071954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.261 [2024-11-15 11:03:18.071965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.261 [2024-11-15 11:03:18.071977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.261 [2024-11-15 11:03:18.071997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.261 [2024-11-15 11:03:18.072008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.261 [2024-11-15 11:03:18.072023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.261 [2024-11-15 11:03:18.072033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.209280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.209363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.520 [2024-11-15 11:03:18.209390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.209401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.317392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.317466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.520 [2024-11-15 11:03:18.317490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.317502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.317705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.317723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.520 [2024-11-15 11:03:18.317735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.317746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.317781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.317793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.520 [2024-11-15 11:03:18.317805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.317820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.317952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.317967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.520 [2024-11-15 11:03:18.317979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.317990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.318047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.318060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.520 [2024-11-15 11:03:18.318071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.318081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.318139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.318152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.520 [2024-11-15 11:03:18.318163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.318174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.318226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.520 [2024-11-15 11:03:18.318239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.520 [2024-11-15 11:03:18.318251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.520 [2024-11-15 11:03:18.318265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.520 [2024-11-15 11:03:18.318438] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.502 ms, result 0 00:19:32.900 00:19:32.900 00:19:32.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.900 11:03:19 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75766 00:19:32.900 11:03:19 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75766 00:19:32.900 11:03:19 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 75766 ']' 00:19:32.900 11:03:19 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.900 11:03:19 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.900 11:03:19 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.900 11:03:19 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.900 11:03:19 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:32.900 11:03:19 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:33.159 [2024-11-15 11:03:19.762560] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
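The trace above (ftl/trim.sh@71-73) is the usual SPDK bring-up step for a test: launch spdk_tgt with FTL initialization tracing enabled, record its PID, and block until the target answers on the RPC socket /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming the waitforlisten/killprocess helpers from common/autotest_common.sh (both visible in the trace around this point) are sourced:

    # start the target with FTL init tracing (-L ftl_init) and keep its PID
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # waitforlisten polls /var/tmp/spdk.sock until the target accepts RPCs
    waitforlisten "$svcpid"
    # ... drive the test through scripts/rpc.py, then tear the target down ...
    killprocess "$svcpid"
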
00:19:33.159 [2024-11-15 11:03:19.762685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75766 ] 00:19:33.159 [2024-11-15 11:03:19.943103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.418 [2024-11-15 11:03:20.087300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.356 11:03:21 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.356 11:03:21 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:34.356 11:03:21 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:34.615 [2024-11-15 11:03:21.346158] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.615 [2024-11-15 11:03:21.346451] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.875 [2024-11-15 11:03:21.531253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.531333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.875 [2024-11-15 11:03:21.531355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:34.875 [2024-11-15 11:03:21.531366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.535135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.535311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.875 [2024-11-15 11:03:21.535339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:19:34.875 [2024-11-15 11:03:21.535351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.535467] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.875 [2024-11-15 11:03:21.536455] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.875 [2024-11-15 11:03:21.536490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.536502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.875 [2024-11-15 11:03:21.536516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.039 ms 00:19:34.875 [2024-11-15 11:03:21.536539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.538997] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:34.875 [2024-11-15 11:03:21.560095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.560140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:34.875 [2024-11-15 11:03:21.560155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.139 ms 00:19:34.875 [2024-11-15 11:03:21.560170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.560273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.560292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:34.875 [2024-11-15 11:03:21.560304] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:34.875 [2024-11-15 11:03:21.560318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.572432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.572475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.875 [2024-11-15 11:03:21.572489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.077 ms 00:19:34.875 [2024-11-15 11:03:21.572509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.572696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.572720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.875 [2024-11-15 11:03:21.572732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:34.875 [2024-11-15 11:03:21.572747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.572793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.572808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.875 [2024-11-15 11:03:21.572819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:34.875 [2024-11-15 11:03:21.572833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.572862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:34.875 [2024-11-15 11:03:21.578882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.578917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.875 [2024-11-15 11:03:21.578933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.033 ms 00:19:34.875 [2024-11-15 11:03:21.578944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.579005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.579018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.875 [2024-11-15 11:03:21.579033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:34.875 [2024-11-15 11:03:21.579047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.579075] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:34.875 [2024-11-15 11:03:21.579102] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:34.875 [2024-11-15 11:03:21.579153] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:34.875 [2024-11-15 11:03:21.579184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:34.875 [2024-11-15 11:03:21.579299] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.875 [2024-11-15 11:03:21.579314] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.875 [2024-11-15 11:03:21.579331] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:34.875 [2024-11-15 11:03:21.579349] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.875 [2024-11-15 11:03:21.579365] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.875 [2024-11-15 11:03:21.579377] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:34.875 [2024-11-15 11:03:21.579391] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.875 [2024-11-15 11:03:21.579401] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.875 [2024-11-15 11:03:21.579418] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.875 [2024-11-15 11:03:21.579429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.579442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.875 [2024-11-15 11:03:21.579453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:19:34.875 [2024-11-15 11:03:21.579466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.579547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.875 [2024-11-15 11:03:21.579578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.875 [2024-11-15 11:03:21.579589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:34.875 [2024-11-15 11:03:21.579603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.875 [2024-11-15 11:03:21.579704] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.875 [2024-11-15 11:03:21.579722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.875 [2024-11-15 11:03:21.579734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.876 [2024-11-15 11:03:21.579748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.876 [2024-11-15 11:03:21.579759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.876 [2024-11-15 11:03:21.579774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.876 [2024-11-15 11:03:21.579784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:34.876 [2024-11-15 11:03:21.579801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.876 [2024-11-15 11:03:21.579810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:34.876 [2024-11-15 11:03:21.579823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.876 [2024-11-15 11:03:21.579833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.876 [2024-11-15 11:03:21.579847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:34.876 [2024-11-15 11:03:21.579856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.876 [2024-11-15 11:03:21.579869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.876 [2024-11-15 11:03:21.579879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:34.876 [2024-11-15 11:03:21.579892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.876 
[2024-11-15 11:03:21.579901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.876 [2024-11-15 11:03:21.579914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:34.876 [2024-11-15 11:03:21.579923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.876 [2024-11-15 11:03:21.579937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.876 [2024-11-15 11:03:21.579956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:34.876 [2024-11-15 11:03:21.579970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.876 [2024-11-15 11:03:21.579980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.876 [2024-11-15 11:03:21.579996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.876 [2024-11-15 11:03:21.580018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.876 [2024-11-15 11:03:21.580028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.876 [2024-11-15 11:03:21.580049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.876 [2024-11-15 11:03:21.580063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.876 [2024-11-15 11:03:21.580085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.876 [2024-11-15 11:03:21.580094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.876 [2024-11-15 11:03:21.580115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.876 [2024-11-15 11:03:21.580127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:34.876 [2024-11-15 11:03:21.580136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.876 [2024-11-15 11:03:21.580147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.876 [2024-11-15 11:03:21.580156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:34.876 [2024-11-15 11:03:21.580171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.876 [2024-11-15 11:03:21.580192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:34.876 [2024-11-15 11:03:21.580202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580217] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.876 [2024-11-15 11:03:21.580228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.876 [2024-11-15 11:03:21.580245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.876 [2024-11-15 11:03:21.580255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.876 [2024-11-15 11:03:21.580269] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:34.876 [2024-11-15 11:03:21.580278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.876 [2024-11-15 11:03:21.580290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.876 [2024-11-15 11:03:21.580300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.876 [2024-11-15 11:03:21.580312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.876 [2024-11-15 11:03:21.580321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.876 [2024-11-15 11:03:21.580345] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.876 [2024-11-15 11:03:21.580359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580378] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:34.876 [2024-11-15 11:03:21.580390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:34.876 [2024-11-15 11:03:21.580404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:34.876 [2024-11-15 11:03:21.580415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:34.876 [2024-11-15 11:03:21.580428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:34.876 [2024-11-15 11:03:21.580438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:34.876 [2024-11-15 11:03:21.580451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:34.876 [2024-11-15 11:03:21.580462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:34.876 [2024-11-15 11:03:21.580475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:34.876 [2024-11-15 11:03:21.580485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:34.876 [2024-11-15 11:03:21.580563] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.876 [2024-11-15 
11:03:21.580576] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.876 [2024-11-15 11:03:21.580607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.876 [2024-11-15 11:03:21.580621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.876 [2024-11-15 11:03:21.580633] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.876 [2024-11-15 11:03:21.580648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.876 [2024-11-15 11:03:21.580660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.876 [2024-11-15 11:03:21.580673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:19:34.876 [2024-11-15 11:03:21.580685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.876 [2024-11-15 11:03:21.632226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.876 [2024-11-15 11:03:21.632268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.876 [2024-11-15 11:03:21.632293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.533 ms 00:19:34.876 [2024-11-15 11:03:21.632305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.876 [2024-11-15 11:03:21.632504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.876 [2024-11-15 11:03:21.632519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:34.877 [2024-11-15 11:03:21.632553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:34.877 [2024-11-15 11:03:21.632565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.877 [2024-11-15 11:03:21.689888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.877 [2024-11-15 11:03:21.690091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.877 [2024-11-15 11:03:21.690122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.382 ms 00:19:34.877 [2024-11-15 11:03:21.690134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.877 [2024-11-15 11:03:21.690236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.877 [2024-11-15 11:03:21.690250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.877 [2024-11-15 11:03:21.690265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.877 [2024-11-15 11:03:21.690276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.877 [2024-11-15 11:03:21.691070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.877 [2024-11-15 11:03:21.691084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.877 [2024-11-15 11:03:21.691103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:19:34.877 [2024-11-15 11:03:21.691114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:34.877 [2024-11-15 11:03:21.691258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.877 [2024-11-15 11:03:21.691278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.877 [2024-11-15 11:03:21.691297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:34.877 [2024-11-15 11:03:21.691307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.877 [2024-11-15 11:03:21.718827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.877 [2024-11-15 11:03:21.719013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.877 [2024-11-15 11:03:21.719042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.531 ms 00:19:34.877 [2024-11-15 11:03:21.719055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.136 [2024-11-15 11:03:21.740413] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:35.137 [2024-11-15 11:03:21.740454] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.137 [2024-11-15 11:03:21.740475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.740487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.137 [2024-11-15 11:03:21.740502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.309 ms 00:19:35.137 [2024-11-15 11:03:21.740513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.772129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.772170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.137 [2024-11-15 11:03:21.772188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.542 ms 00:19:35.137 [2024-11-15 11:03:21.772199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.791302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.791341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.137 [2024-11-15 11:03:21.791363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.040 ms 00:19:35.137 [2024-11-15 11:03:21.791374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.809973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.810147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.137 [2024-11-15 11:03:21.810175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.547 ms 00:19:35.137 [2024-11-15 11:03:21.810186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.811066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.811100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.137 [2024-11-15 11:03:21.811116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:19:35.137 [2024-11-15 11:03:21.811127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 
11:03:21.923542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.923643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.137 [2024-11-15 11:03:21.923671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.545 ms 00:19:35.137 [2024-11-15 11:03:21.923684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.935398] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:35.137 [2024-11-15 11:03:21.961880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.961964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.137 [2024-11-15 11:03:21.961991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.113 ms 00:19:35.137 [2024-11-15 11:03:21.962008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.962192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.962212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.137 [2024-11-15 11:03:21.962224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:35.137 [2024-11-15 11:03:21.962241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.962318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.962336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.137 [2024-11-15 11:03:21.962348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:35.137 [2024-11-15 11:03:21.962364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.962399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.962418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.137 [2024-11-15 11:03:21.962429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:35.137 [2024-11-15 11:03:21.962451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.137 [2024-11-15 11:03:21.962516] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.137 [2024-11-15 11:03:21.962563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.137 [2024-11-15 11:03:21.962576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.137 [2024-11-15 11:03:21.962600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:35.137 [2024-11-15 11:03:21.962611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.396 [2024-11-15 11:03:22.000379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.396 [2024-11-15 11:03:22.000422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.396 [2024-11-15 11:03:22.000444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.785 ms 00:19:35.396 [2024-11-15 11:03:22.000456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.396 [2024-11-15 11:03:22.000607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.396 [2024-11-15 11:03:22.000623] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.396 [2024-11-15 11:03:22.000639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:35.396 [2024-11-15 11:03:22.000654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.396 [2024-11-15 11:03:22.002077] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:35.396 [2024-11-15 11:03:22.006452] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 471.245 ms, result 0 00:19:35.396 [2024-11-15 11:03:22.007813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:35.396 Some configs were skipped because the RPC state that can call them passed over. 00:19:35.396 11:03:22 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:35.396 [2024-11-15 11:03:22.255588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.396 [2024-11-15 11:03:22.255866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:35.396 [2024-11-15 11:03:22.255954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:19:35.396 [2024-11-15 11:03:22.256000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.655 [2024-11-15 11:03:22.256150] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.137 ms, result 0 00:19:35.655 true 00:19:35.655 11:03:22 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:35.655 [2024-11-15 11:03:22.463190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.655 [2024-11-15 11:03:22.463255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:35.655 [2024-11-15 11:03:22.463275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.290 ms 00:19:35.655 [2024-11-15 11:03:22.463287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.655 [2024-11-15 11:03:22.463334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.439 ms, result 0 00:19:35.655 true 00:19:35.655 11:03:22 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75766 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 75766 ']' 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 75766 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75766 00:19:35.655 killing process with pid 75766 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75766' 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 75766 00:19:35.655 11:03:22 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 75766 00:19:37.033 [2024-11-15 11:03:23.751181] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.751281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.033 [2024-11-15 11:03:23.751298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.033 [2024-11-15 11:03:23.751311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.751339] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:37.033 [2024-11-15 11:03:23.756071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.756109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.033 [2024-11-15 11:03:23.756128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:19:37.033 [2024-11-15 11:03:23.756138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.756424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.756438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.033 [2024-11-15 11:03:23.756451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:19:37.033 [2024-11-15 11:03:23.756462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.759846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.759885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.033 [2024-11-15 11:03:23.759903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.363 ms 00:19:37.033 [2024-11-15 11:03:23.759913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.765574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.765748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.033 [2024-11-15 11:03:23.765775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.624 ms 00:19:37.033 [2024-11-15 11:03:23.765786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.781719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.781753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.033 [2024-11-15 11:03:23.781773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.881 ms 00:19:37.033 [2024-11-15 11:03:23.781794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.792944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.793104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.033 [2024-11-15 11:03:23.793134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.094 ms 00:19:37.033 [2024-11-15 11:03:23.793145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.793309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.793322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.033 [2024-11-15 11:03:23.793336] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:19:37.033 [2024-11-15 11:03:23.793346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.809394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.809429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:37.033 [2024-11-15 11:03:23.809444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.047 ms 00:19:37.033 [2024-11-15 11:03:23.809454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.824504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.824550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:37.033 [2024-11-15 11:03:23.824576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.007 ms 00:19:37.033 [2024-11-15 11:03:23.824586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.839347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.839383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.033 [2024-11-15 11:03:23.839406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.677 ms 00:19:37.033 [2024-11-15 11:03:23.839417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.854487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.033 [2024-11-15 11:03:23.854631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.033 [2024-11-15 11:03:23.854662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.002 ms 00:19:37.033 [2024-11-15 11:03:23.854673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.033 [2024-11-15 11:03:23.854759] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.033 [2024-11-15 11:03:23.854779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 
11:03:23.854929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.854992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.855003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.855016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.855026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.855042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.033 [2024-11-15 11:03:23.855053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:37.034 [2024-11-15 11:03:23.855248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.855989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.034 [2024-11-15 11:03:23.856093] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.034 [2024-11-15 11:03:23.856131] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:19:37.034 [2024-11-15 11:03:23.856154] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.034 [2024-11-15 11:03:23.856173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.034 [2024-11-15 11:03:23.856183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.034 [2024-11-15 11:03:23.856197] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.034 [2024-11-15 11:03:23.856207] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.035 [2024-11-15 11:03:23.856221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.035 [2024-11-15 11:03:23.856231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.035 [2024-11-15 11:03:23.856244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.035 [2024-11-15 11:03:23.856253] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.035 [2024-11-15 11:03:23.856267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:37.035 [2024-11-15 11:03:23.856277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.035 [2024-11-15 11:03:23.856291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:19:37.035 [2024-11-15 11:03:23.856301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.035 [2024-11-15 11:03:23.877460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.035 [2024-11-15 11:03:23.877494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.035 [2024-11-15 11:03:23.877514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.161 ms 00:19:37.035 [2024-11-15 11:03:23.877538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.035 [2024-11-15 11:03:23.878212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.035 [2024-11-15 11:03:23.878235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.035 [2024-11-15 11:03:23.878267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.618 ms 00:19:37.035 [2024-11-15 11:03:23.878282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.294 [2024-11-15 11:03:23.953255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.294 [2024-11-15 11:03:23.953296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.294 [2024-11-15 11:03:23.953316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.295 [2024-11-15 11:03:23.953328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.295 [2024-11-15 11:03:23.953473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.295 [2024-11-15 11:03:23.953486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.295 [2024-11-15 11:03:23.953504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.295 [2024-11-15 11:03:23.953521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.295 [2024-11-15 11:03:23.953614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.295 [2024-11-15 11:03:23.953627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.295 [2024-11-15 11:03:23.953651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.295 [2024-11-15 11:03:23.953662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.295 [2024-11-15 11:03:23.953691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.295 [2024-11-15 11:03:23.953702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.295 [2024-11-15 11:03:23.953718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.295 [2024-11-15 11:03:23.953728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.295 [2024-11-15 11:03:24.091558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.295 [2024-11-15 11:03:24.091630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.295 [2024-11-15 11:03:24.091652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.295 [2024-11-15 11:03:24.091664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.554 [2024-11-15 
11:03:24.200466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.554 [2024-11-15 11:03:24.200552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.554 [2024-11-15 11:03:24.200574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.554 [2024-11-15 11:03:24.200591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.200752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.555 [2024-11-15 11:03:24.200766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.555 [2024-11-15 11:03:24.200785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.555 [2024-11-15 11:03:24.200796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.200831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.555 [2024-11-15 11:03:24.200843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.555 [2024-11-15 11:03:24.200857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.555 [2024-11-15 11:03:24.200867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.201002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.555 [2024-11-15 11:03:24.201016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.555 [2024-11-15 11:03:24.201030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.555 [2024-11-15 11:03:24.201041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.201087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.555 [2024-11-15 11:03:24.201100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.555 [2024-11-15 11:03:24.201114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.555 [2024-11-15 11:03:24.201125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.201176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.555 [2024-11-15 11:03:24.201192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.555 [2024-11-15 11:03:24.201209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.555 [2024-11-15 11:03:24.201219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.201276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.555 [2024-11-15 11:03:24.201288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.555 [2024-11-15 11:03:24.201303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.555 [2024-11-15 11:03:24.201314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.555 [2024-11-15 11:03:24.201490] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 451.006 ms, result 0 00:19:38.489 11:03:25 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:38.489 11:03:25 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:38.748 [2024-11-15 11:03:25.396434] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:19:38.748 [2024-11-15 11:03:25.396573] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75835 ] 00:19:38.748 [2024-11-15 11:03:25.579105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.007 [2024-11-15 11:03:25.721028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.577 [2024-11-15 11:03:26.134354] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.577 [2024-11-15 11:03:26.134432] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.577 [2024-11-15 11:03:26.301089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.301152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:39.577 [2024-11-15 11:03:26.301170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:39.577 [2024-11-15 11:03:26.301182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.304742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.304783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.577 [2024-11-15 11:03:26.304796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.544 ms 00:19:39.577 [2024-11-15 11:03:26.304806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.304911] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:39.577 [2024-11-15 11:03:26.305923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:39.577 [2024-11-15 11:03:26.305961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.305974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.577 [2024-11-15 11:03:26.305985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:19:39.577 [2024-11-15 11:03:26.305996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.308426] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:39.577 [2024-11-15 11:03:26.328508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.328557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:39.577 [2024-11-15 11:03:26.328572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.115 ms 00:19:39.577 [2024-11-15 11:03:26.328599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.328701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.328717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:39.577 [2024-11-15 11:03:26.328729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.026 ms 00:19:39.577 [2024-11-15 11:03:26.328740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.340921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.340948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.577 [2024-11-15 11:03:26.340962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.158 ms 00:19:39.577 [2024-11-15 11:03:26.340973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.341096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.341112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.577 [2024-11-15 11:03:26.341123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:39.577 [2024-11-15 11:03:26.341134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.341165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.341181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:39.577 [2024-11-15 11:03:26.341192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:39.577 [2024-11-15 11:03:26.341202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.341227] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:39.577 [2024-11-15 11:03:26.346981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.347013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.577 [2024-11-15 11:03:26.347026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.770 ms 00:19:39.577 [2024-11-15 11:03:26.347052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.347106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.347118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:39.577 [2024-11-15 11:03:26.347130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:39.577 [2024-11-15 11:03:26.347141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.347166] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:39.577 [2024-11-15 11:03:26.347194] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:39.577 [2024-11-15 11:03:26.347231] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:39.577 [2024-11-15 11:03:26.347252] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:39.577 [2024-11-15 11:03:26.347345] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:39.577 [2024-11-15 11:03:26.347359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:39.577 [2024-11-15 11:03:26.347374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:39.577 [2024-11-15 11:03:26.347387] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347404] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347416] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:39.577 [2024-11-15 11:03:26.347427] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:39.577 [2024-11-15 11:03:26.347438] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:39.577 [2024-11-15 11:03:26.347449] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:39.577 [2024-11-15 11:03:26.347459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.347470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:39.577 [2024-11-15 11:03:26.347481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:19:39.577 [2024-11-15 11:03:26.347491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.347579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.577 [2024-11-15 11:03:26.347592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:39.577 [2024-11-15 11:03:26.347608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:39.577 [2024-11-15 11:03:26.347619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.577 [2024-11-15 11:03:26.347707] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:39.577 [2024-11-15 11:03:26.347720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:39.577 [2024-11-15 11:03:26.347732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:39.577 [2024-11-15 11:03:26.347765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:39.577 [2024-11-15 11:03:26.347794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.577 [2024-11-15 11:03:26.347813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:39.577 [2024-11-15 11:03:26.347823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:39.577 [2024-11-15 11:03:26.347833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.577 [2024-11-15 11:03:26.347855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:39.577 [2024-11-15 11:03:26.347865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:39.577 [2024-11-15 11:03:26.347875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347884] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:39.577 [2024-11-15 11:03:26.347894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:39.577 [2024-11-15 11:03:26.347924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:39.577 [2024-11-15 11:03:26.347952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:39.577 [2024-11-15 11:03:26.347979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:39.577 [2024-11-15 11:03:26.347988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.577 [2024-11-15 11:03:26.347997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:39.578 [2024-11-15 11:03:26.348006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:39.578 [2024-11-15 11:03:26.348016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.578 [2024-11-15 11:03:26.348025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:39.578 [2024-11-15 11:03:26.348033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:39.578 [2024-11-15 11:03:26.348042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.578 [2024-11-15 11:03:26.348051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:39.578 [2024-11-15 11:03:26.348060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:39.578 [2024-11-15 11:03:26.348070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.578 [2024-11-15 11:03:26.348078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:39.578 [2024-11-15 11:03:26.348087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:39.578 [2024-11-15 11:03:26.348097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.578 [2024-11-15 11:03:26.348113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:39.578 [2024-11-15 11:03:26.348123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:39.578 [2024-11-15 11:03:26.348132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.578 [2024-11-15 11:03:26.348142] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:39.578 [2024-11-15 11:03:26.348154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:39.578 [2024-11-15 11:03:26.348164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.578 [2024-11-15 11:03:26.348179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.578 [2024-11-15 11:03:26.348189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:39.578 
[2024-11-15 11:03:26.348200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:39.578 [2024-11-15 11:03:26.348209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:39.578 [2024-11-15 11:03:26.348219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:39.578 [2024-11-15 11:03:26.348230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:39.578 [2024-11-15 11:03:26.348239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:39.578 [2024-11-15 11:03:26.348251] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:39.578 [2024-11-15 11:03:26.348265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:39.578 [2024-11-15 11:03:26.348288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:39.578 [2024-11-15 11:03:26.348299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:39.578 [2024-11-15 11:03:26.348309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:39.578 [2024-11-15 11:03:26.348320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:39.578 [2024-11-15 11:03:26.348331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:39.578 [2024-11-15 11:03:26.348341] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:39.578 [2024-11-15 11:03:26.348352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:39.578 [2024-11-15 11:03:26.348362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:39.578 [2024-11-15 11:03:26.348373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:39.578 [2024-11-15 11:03:26.348428] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:39.578 [2024-11-15 11:03:26.348440] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:39.578 [2024-11-15 11:03:26.348462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:39.578 [2024-11-15 11:03:26.348472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:39.578 [2024-11-15 11:03:26.348482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:39.578 [2024-11-15 11:03:26.348495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.578 [2024-11-15 11:03:26.348506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:39.578 [2024-11-15 11:03:26.348533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.843 ms 00:19:39.578 [2024-11-15 11:03:26.348544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.578 [2024-11-15 11:03:26.397564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.578 [2024-11-15 11:03:26.397615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.578 [2024-11-15 11:03:26.397632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.037 ms 00:19:39.578 [2024-11-15 11:03:26.397643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.578 [2024-11-15 11:03:26.397853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.578 [2024-11-15 11:03:26.397868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.578 [2024-11-15 11:03:26.397880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:39.578 [2024-11-15 11:03:26.397891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.462257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.462313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.838 [2024-11-15 11:03:26.462333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.441 ms 00:19:39.838 [2024-11-15 11:03:26.462345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.462474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.462489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.838 [2024-11-15 11:03:26.462501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.838 [2024-11-15 11:03:26.462511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.463211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.463227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.838 [2024-11-15 11:03:26.463240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:19:39.838 [2024-11-15 11:03:26.463258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 
11:03:26.463399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.463414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.838 [2024-11-15 11:03:26.463425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:19:39.838 [2024-11-15 11:03:26.463436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.487519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.487571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.838 [2024-11-15 11:03:26.487588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.096 ms 00:19:39.838 [2024-11-15 11:03:26.487600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.508383] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:39.838 [2024-11-15 11:03:26.508429] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:39.838 [2024-11-15 11:03:26.508447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.508459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:39.838 [2024-11-15 11:03:26.508472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.710 ms 00:19:39.838 [2024-11-15 11:03:26.508483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.539428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.539488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:39.838 [2024-11-15 11:03:26.539503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.862 ms 00:19:39.838 [2024-11-15 11:03:26.539532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.558188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.558230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:39.838 [2024-11-15 11:03:26.558244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:19:39.838 [2024-11-15 11:03:26.558256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.576735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.576772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:39.838 [2024-11-15 11:03:26.576786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.423 ms 00:19:39.838 [2024-11-15 11:03:26.576796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.577666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.577697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.838 [2024-11-15 11:03:26.577710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:19:39.838 [2024-11-15 11:03:26.577722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.674925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:39.838 [2024-11-15 11:03:26.675002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:39.838 [2024-11-15 11:03:26.675034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.325 ms 00:19:39.838 [2024-11-15 11:03:26.675046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.838 [2024-11-15 11:03:26.685980] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.098 [2024-11-15 11:03:26.712216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.712279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.098 [2024-11-15 11:03:26.712299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.118 ms 00:19:40.098 [2024-11-15 11:03:26.712318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.712491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.712506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.098 [2024-11-15 11:03:26.712520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:40.098 [2024-11-15 11:03:26.712546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.712623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.712635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.098 [2024-11-15 11:03:26.712647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:40.098 [2024-11-15 11:03:26.712663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.712699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.712711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.098 [2024-11-15 11:03:26.712723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:40.098 [2024-11-15 11:03:26.712733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.712775] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.098 [2024-11-15 11:03:26.712789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.712800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.098 [2024-11-15 11:03:26.712810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:40.098 [2024-11-15 11:03:26.712820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.750913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.750961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.098 [2024-11-15 11:03:26.750977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.126 ms 00:19:40.098 [2024-11-15 11:03:26.750989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.751120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.098 [2024-11-15 11:03:26.751135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:40.098 [2024-11-15 11:03:26.751148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:40.098 [2024-11-15 11:03:26.751159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.098 [2024-11-15 11:03:26.752432] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.098 [2024-11-15 11:03:26.756841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 451.713 ms, result 0 00:19:40.098 [2024-11-15 11:03:26.757741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.098 [2024-11-15 11:03:26.776353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.035  [2024-11-15T11:03:28.859Z] Copying: 28/256 [MB] (28 MBps) [2024-11-15T11:03:29.796Z] Copying: 53/256 [MB] (25 MBps) [2024-11-15T11:03:31.176Z] Copying: 79/256 [MB] (25 MBps) [2024-11-15T11:03:32.113Z] Copying: 104/256 [MB] (25 MBps) [2024-11-15T11:03:33.047Z] Copying: 129/256 [MB] (25 MBps) [2024-11-15T11:03:33.982Z] Copying: 153/256 [MB] (24 MBps) [2024-11-15T11:03:34.918Z] Copying: 177/256 [MB] (24 MBps) [2024-11-15T11:03:35.864Z] Copying: 202/256 [MB] (24 MBps) [2024-11-15T11:03:36.800Z] Copying: 226/256 [MB] (24 MBps) [2024-11-15T11:03:37.058Z] Copying: 250/256 [MB] (23 MBps) [2024-11-15T11:03:37.058Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-15 11:03:36.983817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:50.197 [2024-11-15 11:03:36.998426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.197 [2024-11-15 11:03:36.998476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:50.197 [2024-11-15 11:03:36.998495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:50.197 [2024-11-15 11:03:36.998520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-11-15 11:03:36.998555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:50.197 [2024-11-15 11:03:37.003162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.197 [2024-11-15 11:03:37.003206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:50.197 [2024-11-15 11:03:37.003220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.596 ms 00:19:50.197 [2024-11-15 11:03:37.003230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-11-15 11:03:37.003456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.197 [2024-11-15 11:03:37.003471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:50.197 [2024-11-15 11:03:37.003481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:19:50.197 [2024-11-15 11:03:37.003491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-11-15 11:03:37.006249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.197 [2024-11-15 11:03:37.006284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:50.197 [2024-11-15 11:03:37.006296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.747 ms 00:19:50.197 [2024-11-15 11:03:37.006307] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-11-15 11:03:37.011608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.197 [2024-11-15 11:03:37.011642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:50.197 [2024-11-15 11:03:37.011653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.290 ms 00:19:50.197 [2024-11-15 11:03:37.011664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.197 [2024-11-15 11:03:37.045596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.197 [2024-11-15 11:03:37.045632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:50.197 [2024-11-15 11:03:37.045645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.913 ms 00:19:50.197 [2024-11-15 11:03:37.045655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.456 [2024-11-15 11:03:37.066394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.457 [2024-11-15 11:03:37.066435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:50.457 [2024-11-15 11:03:37.066459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.733 ms 00:19:50.457 [2024-11-15 11:03:37.066470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.457 [2024-11-15 11:03:37.066617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.457 [2024-11-15 11:03:37.066631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:50.457 [2024-11-15 11:03:37.066642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:50.457 [2024-11-15 11:03:37.066652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.457 [2024-11-15 11:03:37.101959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.457 [2024-11-15 11:03:37.101996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:50.457 [2024-11-15 11:03:37.102007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.328 ms 00:19:50.457 [2024-11-15 11:03:37.102017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.457 [2024-11-15 11:03:37.135203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.457 [2024-11-15 11:03:37.135239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:50.457 [2024-11-15 11:03:37.135251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.174 ms 00:19:50.457 [2024-11-15 11:03:37.135260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.457 [2024-11-15 11:03:37.168677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.457 [2024-11-15 11:03:37.168713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:50.457 [2024-11-15 11:03:37.168725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.433 ms 00:19:50.457 [2024-11-15 11:03:37.168734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.457 [2024-11-15 11:03:37.202116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:50.457 [2024-11-15 11:03:37.202149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:50.457 [2024-11-15 11:03:37.202161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 33.370 ms 00:19:50.457 [2024-11-15 11:03:37.202170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:50.457 [2024-11-15 11:03:37.202219] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:50.457 [2024-11-15 11:03:37.202236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 
[2024-11-15 11:03:37.202466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:19:50.457 [2024-11-15 11:03:37.202723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:50.457 [2024-11-15 11:03:37.202889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:50.458 [2024-11-15 11:03:37.202960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free
[2024-11-15 11:03:37.202970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
[... 26 more ftl_dev_dump_bands entries, Bands 75-100, each "0 / 261120 wr_cnt: 0 state: free" ...]
[2024-11-15 11:03:37.203256] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
[2024-11-15 11:03:37.203265] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0
[2024-11-15 11:03:37.203276] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
[2024-11-15 11:03:37.203285] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
[2024-11-15 11:03:37.203295] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
[2024-11-15 11:03:37.203304] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
[2024-11-15 11:03:37.203315] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
[2024-11-15 11:03:37.203325] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
[2024-11-15 11:03:37.203339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
[2024-11-15 11:03:37.203347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
[2024-11-15 11:03:37.203355] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
[2024-11-15 11:03:37.203364] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 1.149 ms, status: 0
[2024-11-15 11:03:37.222876] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 19.495 ms, status: 0
[2024-11-15 11:03:37.223581] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.600 ms, status: 0
[2024-11-15 11:03:37.280963] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
[... 11 more trace_step Rollback entries, each "duration: 0.000 ms, status: 0": Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev ...]
[2024-11-15 11:03:37.506449] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 508.843 ms, result 0

11:03:38 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero
11:03:38 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data
11:03:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
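Editor's note: the three xtrace lines above are the core of this trim check. A minimal standalone sketch of the same verification, using only the paths shown in the log (the test's own helper plumbing is omitted):

    # Sketch of the check driven by trim.sh@86-90 above. Assumption: the trimmed
    # range of the FTL bdev was previously read back into .../test/ftl/data,
    # and 4194304 B (4 MiB) is the region under test.
    data=/home/vagrant/spdk_repo/spdk/test/ftl/data

    # 1) A trimmed region must read back as zeroes; cmp exits non-zero on the
    #    first differing byte, which fails the test under `set -e`.
    cmp --bytes=4194304 "$data" /dev/zero

    # 2) Record a checksum of the current contents for later comparison.
    md5sum "$data"

    # 3) Rewrite the same 1024 blocks with a known random pattern through ftl0,
    #    so later read-backs can be compared against .../test/ftl/random_pattern.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 --count=1024 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json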
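The --json file passed to spdk_dd defines the bdev stack it runs on. As a rough, hypothetical illustration of its shape only (field names taken from the bdev_ftl_create RPC; "base0" is an invented base-bdev name, while ftl0 and the nvc0n1p0 cache appear in this log):

    # Hypothetical sketch of what ftl.json could look like; not copied from the repo.
    cat > ftl.json <<'EOF'
    {
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "method": "bdev_ftl_create",
              "params": { "name": "ftl0", "base_bdev": "base0", "cache": "nvc0n1p0" } }
          ] }
      ]
    }
    EOF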
[2024-11-15 11:03:39.132909] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
[2024-11-15 11:03:39.133038] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75978 ]
[2024-11-15 11:03:39.312502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-15 11:03:39.438152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-15 11:03:39.822816] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-15 11:03:39.822899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-11-15 11:03:39.989293] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.006 ms, status: 0
[2024-11-15 11:03:39.992685] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 3.296 ms, status: 0
[2024-11-15 11:03:39.992858] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-15 11:03:39.993872] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-11-15 11:03:39.993910] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.062 ms, status: 0
[2024-11-15 11:03:39.996388] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-11-15 11:03:40.017212] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 20.858 ms, status: 0
[2024-11-15 11:03:40.017423] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.029 ms, status: 0
[2024-11-15 11:03:40.030551] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 13.063 ms, status: 0
[2024-11-15 11:03:40.030755] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.076 ms, status: 0
[2024-11-15 11:03:40.030828] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.011 ms, status: 0
[2024-11-15 11:03:40.030896] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
[2024-11-15 11:03:40.036592] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 5.716 ms, status: 0
[2024-11-15 11:03:40.036709] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.011 ms, status: 0
[2024-11-15 11:03:40.036769] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-15 11:03:40.036800] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
[2024-11-15 11:03:40.036842] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
[2024-11-15 11:03:40.036862] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
[2024-11-15 11:03:40.036958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
[2024-11-15 11:03:40.036971] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
[2024-11-15 11:03:40.036986] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
[2024-11-15 11:03:40.037000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
[2024-11-15 11:03:40.037017] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
[2024-11-15 11:03:40.037033] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
[2024-11-15 11:03:40.037046] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
[2024-11-15 11:03:40.037056] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
[2024-11-15 11:03:40.037068] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
[2024-11-15 11:03:40.037079] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.315 ms, status: 0
[2024-11-15 11:03:40.037190] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.055 ms, status: 0
[2024-11-15 11:03:40.037321] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
[2024-11-15 11:03:40.037334] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-15 11:03:40.037368] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
[2024-11-15 11:03:40.037397] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
[2024-11-15 11:03:40.037426] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
[2024-11-15 11:03:40.037469] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 123.88 MiB, blocks 0.12 MiB
[2024-11-15 11:03:40.037498] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 124.00 MiB, blocks 0.12 MiB
[2024-11-15 11:03:40.037551] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 8.00 MiB
[2024-11-15 11:03:40.037580] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 99.12 MiB, blocks 8.00 MiB
[2024-11-15 11:03:40.037609] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 107.12 MiB, blocks 8.00 MiB
[2024-11-15 11:03:40.037637] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 115.12 MiB, blocks 8.00 MiB
[2024-11-15 11:03:40.037666] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 123.12 MiB, blocks 0.25 MiB
[2024-11-15 11:03:40.037695] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 123.38 MiB, blocks 0.25 MiB
[2024-11-15 11:03:40.037722] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset 123.62 MiB, blocks 0.12 MiB
[2024-11-15 11:03:40.037749] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset 123.75 MiB, blocks 0.12 MiB
[2024-11-15 11:03:40.037777] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
[2024-11-15 11:03:40.037789] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
[2024-11-15 11:03:40.037826] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
[2024-11-15 11:03:40.037856] ftl_layout.c: 130-133:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
[2024-11-15 11:03:40.037886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
[2024-11-15 11:03:40.037899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-15 11:03:40.037911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
[2024-11-15 11:03:40.037922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
[2024-11-15 11:03:40.037933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
[2024-11-15 11:03:40.037943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
[2024-11-15 11:03:40.037954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
[2024-11-15 11:03:40.037966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
[2024-11-15 11:03:40.037977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
[2024-11-15 11:03:40.037987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
[2024-11-15 11:03:40.037998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
[2024-11-15 11:03:40.038008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
[2024-11-15 11:03:40.038019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
[2024-11-15 11:03:40.038029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
[2024-11-15 11:03:40.038039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
[2024-11-15 11:03:40.038050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
[2024-11-15 11:03:40.038060] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
[2024-11-15 11:03:40.038071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
[2024-11-15 11:03:40.038083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
[2024-11-15 11:03:40.038093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
[2024-11-15 11:03:40.038103] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
[2024-11-15 11:03:40.038113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
[2024-11-15 11:03:40.038124] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.860 ms, status: 0
[2024-11-15 11:03:40.088081] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 49.936 ms, status: 0
[2024-11-15 11:03:40.088355] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.059 ms, status: 0
[2024-11-15 11:03:40.152696] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 64.381 ms, status: 0
[2024-11-15 11:03:40.152898] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.003 ms, status: 0
[2024-11-15 11:03:40.153721] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.764 ms, status: 0
[2024-11-15 11:03:40.153912] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.113 ms, status: 0
[2024-11-15 11:03:40.178442] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 24.507 ms, status: 0
[2024-11-15 11:03:40.198639] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
[2024-11-15 11:03:40.198680] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
[2024-11-15 11:03:40.198696] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 20.076 ms, status: 0
[2024-11-15 11:03:40.228518] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 29.725 ms, status: 0
[2024-11-15 11:03:40.246392] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 17.725 ms, status: 0
[2024-11-15 11:03:40.263899] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 17.375 ms, status: 0
[2024-11-15 11:03:40.264774] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.697 ms, status: 0
[2024-11-15 11:03:40.362390] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 97.686 ms, status: 0
[2024-11-15 11:03:40.372978] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
[2024-11-15 11:03:40.398083] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 35.505 ms, status: 0
[2024-11-15 11:03:40.398307] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.009 ms, status: 0
[2024-11-15 11:03:40.398421] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.050 ms, status: 0
[2024-11-15 11:03:40.398491] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.008 ms, status: 0
[2024-11-15 11:03:40.398588] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
[2024-11-15 11:03:40.398602] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.015 ms, status: 0
[2024-11-15 11:03:40.435595] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 36.996 ms, status: 0
[2024-11-15 11:03:40.435822] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.048 ms, status: 0
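Editor's note: the layout numbers above are mutually consistent, assuming FTL's standard 4 KiB logical block (the log does not state the block size outright). 23592960 L2P entries x 4 B per entry = 94371840 B, exactly the 90.00 MiB "l2p" region in the NV cache layout; those same 23592960 blocks x 4 KiB give 90 GiB of user-addressable space, carved out of the 102400.00 MiB "data_btm" region (100 bands x 261120 blocks x 4 KiB = 102000 MiB of band capacity). A quick shell check of both figures:

    echo $((23592960 * 4 / 1048576))          # 90     -> MiB occupied by the L2P table
    echo $((100 * 261120 * 4096 / 1048576))   # 102000 -> MiB of total band capacity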
[2024-11-15 11:03:40.437191] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-15 11:03:40.441163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 448.265 ms, result 0
[2024-11-15 11:03:40.442111] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-15 11:03:40.460096] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
[2024-11-15T11:03:40.822Z] Copying: 4096/4096 [kB] (average 23 MBps)
[2024-11-15 11:03:40.636318] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-11-15 11:03:40.649330] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.003 ms, status: 0
[2024-11-15 11:03:40.649433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
[2024-11-15 11:03:40.653890] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.450 ms, status: 0
[2024-11-15 11:03:40.655962] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 1.985 ms, status: 0
[2024-11-15 11:03:40.659220] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 3.173 ms, status: 0
[2024-11-15 11:03:40.664765] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P trims, duration: 5.428 ms, status: 0
[2024-11-15 11:03:40.699403] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 34.559 ms, status: 0
[2024-11-15 11:03:40.720810] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 21.304 ms, status: 0
[2024-11-15 11:03:40.721027] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.076 ms, status: 0
[2024-11-15 11:03:40.757152] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist band info metadata, duration: 36.119 ms, status: 0
[2024-11-15 11:03:40.792012] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist trim metadata, duration: 34.805 ms, status: 0
[2024-11-15 11:03:40.825832] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 33.766 ms, status: 0
[2024-11-15 11:03:40.860659] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 34.736 ms, status: 0
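Editor's note: the "Copying: 4096/4096 [kB]" progress line above matches the spdk_dd invocation earlier, assuming ftl0's 4096 B block size: --count=1024 blocks x 4096 B = 4194304 B, i.e. 4096 kB, which is also the exact byte count the earlier cmp verified against /dev/zero.

    echo $((1024 * 4096))   # 4194304 B = 4096 kB = 4 MiB written through ftl0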
00:19:54.227 [2024-11-15 11:03:40.860847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.860998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.227 [2024-11-15 11:03:40.861596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861617] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.228 [2024-11-15 11:03:40.861864] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.228 [2024-11-15 11:03:40.861874] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:19:54.228 [2024-11-15 11:03:40.861884] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:54.228 [2024-11-15 11:03:40.861894] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:54.228 [2024-11-15 11:03:40.861904] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:54.228 [2024-11-15 11:03:40.861914] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:54.228 [2024-11-15 11:03:40.861924] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:54.228 [2024-11-15 11:03:40.861933] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:54.228 [2024-11-15 11:03:40.861943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:54.228 [2024-11-15 11:03:40.861952] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:54.228 [2024-11-15 11:03:40.861960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:54.228 [2024-11-15 11:03:40.861970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.228 [2024-11-15 11:03:40.861984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:54.228 [2024-11-15 11:03:40.861994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.190 ms 00:19:54.228 [2024-11-15 11:03:40.862004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:40.882347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.228 [2024-11-15 11:03:40.882382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:54.228 [2024-11-15 11:03:40.882394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.358 ms 00:19:54.228 [2024-11-15 11:03:40.882420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:40.883089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.228 [2024-11-15 11:03:40.883112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:54.228 [2024-11-15 11:03:40.883124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:19:54.228 [2024-11-15 11:03:40.883133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:40.940503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.228 [2024-11-15 11:03:40.940544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:54.228 [2024-11-15 11:03:40.940557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.228 [2024-11-15 11:03:40.940568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:40.940650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.228 [2024-11-15 11:03:40.940662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:54.228 [2024-11-15 11:03:40.940673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.228 [2024-11-15 11:03:40.940684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:40.940731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.228 [2024-11-15 11:03:40.940745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:54.228 [2024-11-15 11:03:40.940755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.228 [2024-11-15 11:03:40.940765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:40.940783] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.228 [2024-11-15 11:03:40.940799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:54.228 [2024-11-15 11:03:40.940810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.228 [2024-11-15 11:03:40.940819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.228 [2024-11-15 11:03:41.073296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.228 [2024-11-15 11:03:41.073359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:54.228 [2024-11-15 11:03:41.073377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.228 [2024-11-15 11:03:41.073405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.176960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:54.498 [2024-11-15 11:03:41.177037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:54.498 [2024-11-15 11:03:41.177210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:54.498 [2024-11-15 11:03:41.177285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:54.498 [2024-11-15 11:03:41.177451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:54.498 [2024-11-15 11:03:41.177552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:54.498 [2024-11-15 11:03:41.177641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177651] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:54.498 [2024-11-15 11:03:41.177718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:54.498 [2024-11-15 11:03:41.177735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:54.498 [2024-11-15 11:03:41.177747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.498 [2024-11-15 11:03:41.177923] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 529.426 ms, result 0 00:19:55.436 00:19:55.436 00:19:55.436 11:03:42 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76019 00:19:55.436 11:03:42 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:55.436 11:03:42 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76019 00:19:55.436 11:03:42 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76019 ']' 00:19:55.436 11:03:42 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:55.436 11:03:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:55.436 11:03:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.436 11:03:42 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:55.436 11:03:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:55.696 [2024-11-15 11:03:42.377943] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
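(For readers following the trim.sh flow above: a minimal sketch, assuming the same repo paths shown in this log, of the target-plus-RPC sequence the test is exercising — start spdk_tgt with FTL init logging, wait for its UNIX socket, replay a saved bdev config, then trim via RPC. The "ftl.json" config name and the socket-poll loop are illustrative stand-ins, not the suite's actual waitforlisten helper; the binary, socket path, and RPC names/flags are the ones that appear verbatim in the surrounding log.)

# Sketch only — not the actual trim.sh; "ftl.json" is an assumed config file name.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
svcpid=$!
# Stand-in for the suite's waitforlisten: poll until the RPC socket exists.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done
# Recreate the nvc0n1/ftl0 bdevs from a previously saved config, then trim.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config < ftl.json
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
kill "$svcpid"; wait "$svcpid"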
00:19:55.696 [2024-11-15 11:03:42.378065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76019 ] 00:19:55.955 [2024-11-15 11:03:42.558785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.955 [2024-11-15 11:03:42.696216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.894 11:03:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:56.894 11:03:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:56.894 11:03:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:57.153 [2024-11-15 11:03:43.919909] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.153 [2024-11-15 11:03:43.919983] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:57.413 [2024-11-15 11:03:44.112811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.413 [2024-11-15 11:03:44.112872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:57.413 [2024-11-15 11:03:44.112910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:57.413 [2024-11-15 11:03:44.112921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.413 [2024-11-15 11:03:44.117186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.413 [2024-11-15 11:03:44.117225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.414 [2024-11-15 11:03:44.117240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.249 ms 00:19:57.414 [2024-11-15 11:03:44.117251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.117359] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:57.414 [2024-11-15 11:03:44.118443] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:57.414 [2024-11-15 11:03:44.118481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.118493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.414 [2024-11-15 11:03:44.118507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:19:57.414 [2024-11-15 11:03:44.118518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.120980] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:57.414 [2024-11-15 11:03:44.141245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.141291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:57.414 [2024-11-15 11:03:44.141322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.302 ms 00:19:57.414 [2024-11-15 11:03:44.141338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.141442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.141479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:57.414 [2024-11-15 11:03:44.141491] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:57.414 [2024-11-15 11:03:44.141508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.153751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.153797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.414 [2024-11-15 11:03:44.153828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.179 ms 00:19:57.414 [2024-11-15 11:03:44.153845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.154004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.154023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.414 [2024-11-15 11:03:44.154035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:19:57.414 [2024-11-15 11:03:44.154050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.154088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.154104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:57.414 [2024-11-15 11:03:44.154115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:57.414 [2024-11-15 11:03:44.154129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.154157] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:57.414 [2024-11-15 11:03:44.159949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.159980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.414 [2024-11-15 11:03:44.159995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.805 ms 00:19:57.414 [2024-11-15 11:03:44.160022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.160083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.160096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:57.414 [2024-11-15 11:03:44.160111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:57.414 [2024-11-15 11:03:44.160124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.160152] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:57.414 [2024-11-15 11:03:44.160178] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:57.414 [2024-11-15 11:03:44.160227] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:57.414 [2024-11-15 11:03:44.160249] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:57.414 [2024-11-15 11:03:44.160346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:57.414 [2024-11-15 11:03:44.160360] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:57.414 [2024-11-15 11:03:44.160379] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:57.414 [2024-11-15 11:03:44.160396] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:57.414 [2024-11-15 11:03:44.160413] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:57.414 [2024-11-15 11:03:44.160425] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:57.414 [2024-11-15 11:03:44.160440] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:57.414 [2024-11-15 11:03:44.160450] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:57.414 [2024-11-15 11:03:44.160468] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:57.414 [2024-11-15 11:03:44.160479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.160498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:57.414 [2024-11-15 11:03:44.160509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:19:57.414 [2024-11-15 11:03:44.160537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.160621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.414 [2024-11-15 11:03:44.160638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:57.414 [2024-11-15 11:03:44.160650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:57.414 [2024-11-15 11:03:44.160667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.414 [2024-11-15 11:03:44.160772] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:57.414 [2024-11-15 11:03:44.160793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:57.414 [2024-11-15 11:03:44.160804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.414 [2024-11-15 11:03:44.160821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.414 [2024-11-15 11:03:44.160832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:57.414 [2024-11-15 11:03:44.160847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:57.414 [2024-11-15 11:03:44.160857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:57.414 [2024-11-15 11:03:44.160881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:57.414 [2024-11-15 11:03:44.160890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:57.414 [2024-11-15 11:03:44.160906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.414 [2024-11-15 11:03:44.160916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:57.414 [2024-11-15 11:03:44.160931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:57.414 [2024-11-15 11:03:44.160941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:57.414 [2024-11-15 11:03:44.160957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:57.414 [2024-11-15 11:03:44.160968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:57.414 [2024-11-15 11:03:44.160984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.414 
[2024-11-15 11:03:44.160994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:57.414 [2024-11-15 11:03:44.161009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:57.414 [2024-11-15 11:03:44.161057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:57.414 [2024-11-15 11:03:44.161102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:57.414 [2024-11-15 11:03:44.161137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:57.414 [2024-11-15 11:03:44.161176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:57.414 [2024-11-15 11:03:44.161211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.414 [2024-11-15 11:03:44.161237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:57.414 [2024-11-15 11:03:44.161251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:57.414 [2024-11-15 11:03:44.161260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:57.414 [2024-11-15 11:03:44.161275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:57.414 [2024-11-15 11:03:44.161285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:57.414 [2024-11-15 11:03:44.161305] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:57.414 [2024-11-15 11:03:44.161330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:57.414 [2024-11-15 11:03:44.161340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161354] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:57.414 [2024-11-15 11:03:44.161366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:57.414 [2024-11-15 11:03:44.161382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.414 [2024-11-15 11:03:44.161406] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:57.414 [2024-11-15 11:03:44.161415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.414 [2024-11-15 11:03:44.161428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.414 [2024-11-15 11:03:44.161438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.414 [2024-11-15 11:03:44.161450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.414 [2024-11-15 11:03:44.161460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.414 [2024-11-15 11:03:44.161474] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.414 [2024-11-15 11:03:44.161487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.414 [2024-11-15 11:03:44.161507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:57.414 [2024-11-15 11:03:44.161519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:57.414 [2024-11-15 11:03:44.161553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:57.414 [2024-11-15 11:03:44.161564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:57.415 [2024-11-15 11:03:44.161578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:57.415 [2024-11-15 11:03:44.161589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:57.415 [2024-11-15 11:03:44.161602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:57.415 [2024-11-15 11:03:44.161612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:57.415 [2024-11-15 11:03:44.161625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:57.415 [2024-11-15 11:03:44.161635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:57.415 [2024-11-15 11:03:44.161648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:57.415 [2024-11-15 11:03:44.161659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:57.415 [2024-11-15 11:03:44.161671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:57.415 [2024-11-15 11:03:44.161682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:57.415 [2024-11-15 11:03:44.161695] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.415 [2024-11-15 
11:03:44.161707] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.415 [2024-11-15 11:03:44.161724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.415 [2024-11-15 11:03:44.161735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.415 [2024-11-15 11:03:44.161748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.415 [2024-11-15 11:03:44.161758] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.415 [2024-11-15 11:03:44.161772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.161783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.415 [2024-11-15 11:03:44.161797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.052 ms 00:19:57.415 [2024-11-15 11:03:44.161808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.415 [2024-11-15 11:03:44.213695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.213734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.415 [2024-11-15 11:03:44.213754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.896 ms 00:19:57.415 [2024-11-15 11:03:44.213766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.415 [2024-11-15 11:03:44.213930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.213945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.415 [2024-11-15 11:03:44.213962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:57.415 [2024-11-15 11:03:44.213973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.415 [2024-11-15 11:03:44.270513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.270565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.415 [2024-11-15 11:03:44.270586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.597 ms 00:19:57.415 [2024-11-15 11:03:44.270597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.415 [2024-11-15 11:03:44.270675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.270688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.415 [2024-11-15 11:03:44.270706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:57.415 [2024-11-15 11:03:44.270717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.415 [2024-11-15 11:03:44.271470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.271493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.415 [2024-11-15 11:03:44.271518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:19:57.415 [2024-11-15 11:03:44.271541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:57.415 [2024-11-15 11:03:44.271684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.415 [2024-11-15 11:03:44.271698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.415 [2024-11-15 11:03:44.271716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:57.415 [2024-11-15 11:03:44.271727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.299320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.299356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.675 [2024-11-15 11:03:44.299391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.602 ms 00:19:57.675 [2024-11-15 11:03:44.299403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.319591] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:57.675 [2024-11-15 11:03:44.319628] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.675 [2024-11-15 11:03:44.319648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.319676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.675 [2024-11-15 11:03:44.319691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.152 ms 00:19:57.675 [2024-11-15 11:03:44.319702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.348876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.348916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.675 [2024-11-15 11:03:44.348936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.109 ms 00:19:57.675 [2024-11-15 11:03:44.348963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.366572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.366607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.675 [2024-11-15 11:03:44.366646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.540 ms 00:19:57.675 [2024-11-15 11:03:44.366656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.383717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.383763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.675 [2024-11-15 11:03:44.383783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.002 ms 00:19:57.675 [2024-11-15 11:03:44.383809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.384640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.384670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.675 [2024-11-15 11:03:44.384689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:19:57.675 [2024-11-15 11:03:44.384700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 
11:03:44.489607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.489684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.675 [2024-11-15 11:03:44.489725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.043 ms 00:19:57.675 [2024-11-15 11:03:44.489736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.500192] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:57.675 [2024-11-15 11:03:44.524484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.524549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.675 [2024-11-15 11:03:44.524587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.712 ms 00:19:57.675 [2024-11-15 11:03:44.524601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.524737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.524755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.675 [2024-11-15 11:03:44.524767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:57.675 [2024-11-15 11:03:44.524782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.524850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.524866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.675 [2024-11-15 11:03:44.524877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:57.675 [2024-11-15 11:03:44.524891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.524924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.675 [2024-11-15 11:03:44.524940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.675 [2024-11-15 11:03:44.524950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:57.675 [2024-11-15 11:03:44.524966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.675 [2024-11-15 11:03:44.525010] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.675 [2024-11-15 11:03:44.525031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.676 [2024-11-15 11:03:44.525042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.676 [2024-11-15 11:03:44.525061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:57.676 [2024-11-15 11:03:44.525071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.935 [2024-11-15 11:03:44.561506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.935 [2024-11-15 11:03:44.561556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.935 [2024-11-15 11:03:44.561575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.457 ms 00:19:57.935 [2024-11-15 11:03:44.561602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.935 [2024-11-15 11:03:44.561725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.935 [2024-11-15 11:03:44.561739] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.935 [2024-11-15 11:03:44.561755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:57.935 [2024-11-15 11:03:44.561770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.935 [2024-11-15 11:03:44.563062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:57.935 [2024-11-15 11:03:44.567300] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 450.632 ms, result 0 00:19:57.935 [2024-11-15 11:03:44.568607] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.935 Some configs were skipped because the RPC state that can call them passed over. 00:19:57.935 11:03:44 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:58.193 [2024-11-15 11:03:44.816100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.193 [2024-11-15 11:03:44.816153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:58.193 [2024-11-15 11:03:44.816168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.683 ms 00:19:58.193 [2024-11-15 11:03:44.816182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.193 [2024-11-15 11:03:44.816217] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.801 ms, result 0 00:19:58.193 true 00:19:58.193 11:03:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:58.193 [2024-11-15 11:03:45.027635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.193 [2024-11-15 11:03:45.027670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:58.193 [2024-11-15 11:03:45.027686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.346 ms 00:19:58.193 [2024-11-15 11:03:45.027697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.193 [2024-11-15 11:03:45.027737] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.446 ms, result 0 00:19:58.193 true 00:19:58.452 11:03:45 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76019 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76019 ']' 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76019 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76019 00:19:58.452 killing process with pid 76019 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76019' 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76019 00:19:58.452 11:03:45 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76019 00:19:59.831 [2024-11-15 11:03:46.289065] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.289169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:59.831 [2024-11-15 11:03:46.289202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.831 [2024-11-15 11:03:46.289216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.831 [2024-11-15 11:03:46.289244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:59.831 [2024-11-15 11:03:46.293797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.293836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:59.831 [2024-11-15 11:03:46.293854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.534 ms 00:19:59.831 [2024-11-15 11:03:46.293865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.831 [2024-11-15 11:03:46.294152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.294173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:59.831 [2024-11-15 11:03:46.294187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:19:59.831 [2024-11-15 11:03:46.294198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.831 [2024-11-15 11:03:46.297605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.297644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:59.831 [2024-11-15 11:03:46.297662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.388 ms 00:19:59.831 [2024-11-15 11:03:46.297673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.831 [2024-11-15 11:03:46.303091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.303128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:59.831 [2024-11-15 11:03:46.303143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.382 ms 00:19:59.831 [2024-11-15 11:03:46.303168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.831 [2024-11-15 11:03:46.318126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.318166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:59.831 [2024-11-15 11:03:46.318186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.908 ms 00:19:59.831 [2024-11-15 11:03:46.318222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.831 [2024-11-15 11:03:46.329977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.831 [2024-11-15 11:03:46.330024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:59.831 [2024-11-15 11:03:46.330062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.694 ms 00:19:59.831 [2024-11-15 11:03:46.330074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.832 [2024-11-15 11:03:46.330238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.832 [2024-11-15 11:03:46.330254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:59.832 [2024-11-15 11:03:46.330268] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:59.832 [2024-11-15 11:03:46.330279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.832 [2024-11-15 11:03:46.345480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.832 [2024-11-15 11:03:46.345515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:59.832 [2024-11-15 11:03:46.345575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.198 ms 00:19:59.832 [2024-11-15 11:03:46.345586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.832 [2024-11-15 11:03:46.360051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.832 [2024-11-15 11:03:46.360087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:59.832 [2024-11-15 11:03:46.360107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.431 ms 00:19:59.832 [2024-11-15 11:03:46.360132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.832 [2024-11-15 11:03:46.373902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.832 [2024-11-15 11:03:46.373938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:59.832 [2024-11-15 11:03:46.373958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.736 ms 00:19:59.832 [2024-11-15 11:03:46.373984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.832 [2024-11-15 11:03:46.388867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.832 [2024-11-15 11:03:46.388912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:59.832 [2024-11-15 11:03:46.388929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.820 ms 00:19:59.832 [2024-11-15 11:03:46.388955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.832 [2024-11-15 11:03:46.389011] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:59.832 [2024-11-15 11:03:46.389033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 
11:03:46.389170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:59.832 [2024-11-15 11:03:46.389494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:59.832 [2024-11-15 11:03:46.389989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:59.833 [2024-11-15 11:03:46.390347] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:59.833 [2024-11-15 11:03:46.390388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:19:59.833 [2024-11-15 11:03:46.390428] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:59.833 [2024-11-15 11:03:46.390453] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:59.833 [2024-11-15 11:03:46.390463] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:59.833 [2024-11-15 11:03:46.390480] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:59.833 [2024-11-15 11:03:46.390491] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:59.833 [2024-11-15 11:03:46.390508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:59.833 [2024-11-15 11:03:46.390519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:59.833 [2024-11-15 11:03:46.390545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:59.833 [2024-11-15 11:03:46.390554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:59.833 [2024-11-15 11:03:46.390570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:59.833 [2024-11-15 11:03:46.390581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:59.833 [2024-11-15 11:03:46.390598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.563 ms
00:19:59.833 [2024-11-15 11:03:46.390609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:59.833 [2024-11-15 11:03:46.411813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.833 [2024-11-15 11:03:46.411852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:59.833 [2024-11-15 11:03:46.411894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.194 ms
00:19:59.833 [2024-11-15 11:03:46.411905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:59.833 [2024-11-15 11:03:46.412582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:59.833 [2024-11-15 11:03:46.412608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:59.833 [2024-11-15 11:03:46.412627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms
00:19:59.833 [2024-11-15 11:03:46.412646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:59.833 [2024-11-15 11:03:46.483215-610995] mngt/ftl_mngt.c: 427-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback names: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache (5 steps, each duration: 0.000 ms, status: 0; condensed)
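Each management step is traced as a quadruple: an Action or Rollback header from trace_step, then name, duration and status records. The Deinitialize steps above carry real durations (21.194 ms for the L2P), while the Rollback entries appear to be the shutdown path unwinding startup steps that hold no state to tear down, hence 0.000 ms each. A hedged one-liner for totalling step durations from a saved copy of this log (build.log again a placeholder):

  # Sum every 'duration: X ms' record emitted by trace_step
  grep -o 'duration: [0-9.]* ms' build.log | awk '{ sum += $2 } END { printf "%.3f ms total\n", sum }'

00:20:00.092 [2024-11-15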
11:03:46.715427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:00.092 [2024-11-15 11:03:46.715518-716328] mngt/ftl_mngt.c: 428-431:trace_step: *NOTICE*: [FTL][ftl0] Rollback names: Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev (7 steps, each duration: 0.000 ms, status: 0; condensed)
00:20:00.093 [2024-11-15 11:03:46.716539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 428.098 ms, result 0
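With the 'FTL shutdown' management process finished (result 0), trim.sh line 105 re-opens the device from its saved JSON config and reads the test region back out through the ftl0 bdev with spdk_dd, as echoed below. A sketch of that invocation using the paths from this run; the 4 KiB block size is inferred from 65536 blocks producing the 256 MiB copy reported further down:

  SPDK=/home/vagrant/spdk_repo/spdk
  # --ib names an SPDK bdev as input; --of is a plain output file for md5 checking
  "$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$SPDK"/test/ftl/data \
      --count=65536 --json="$SPDK"/test/ftl/config/ftl.json

00:20:01.030 11:03:47 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536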
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:01.030 [2024-11-15 11:03:47.872726] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:20:01.030 [2024-11-15 11:03:47.872853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76085 ] 00:20:01.290 [2024-11-15 11:03:48.054469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.549 [2024-11-15 11:03:48.179749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.808 [2024-11-15 11:03:48.572650] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:01.808 [2024-11-15 11:03:48.572730] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:02.068 [2024-11-15 11:03:48.738260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.738324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:02.068 [2024-11-15 11:03:48.738341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:02.068 [2024-11-15 11:03:48.738368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-15 11:03:48.741803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.741842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.068 [2024-11-15 11:03:48.741855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.420 ms 00:20:02.068 [2024-11-15 11:03:48.741881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-15 11:03:48.741985] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:02.068 [2024-11-15 11:03:48.742931] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:02.068 [2024-11-15 11:03:48.742968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.742980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.068 [2024-11-15 11:03:48.742992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:20:02.068 [2024-11-15 11:03:48.743003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-15 11:03:48.745446] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:02.068 [2024-11-15 11:03:48.763917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.763961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:02.068 [2024-11-15 11:03:48.763977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.502 ms 00:20:02.068 [2024-11-15 11:03:48.764003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-15 11:03:48.764104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.764119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:02.068 [2024-11-15 11:03:48.764131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:02.068 [2024-11-15 
11:03:48.764141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-15 11:03:48.775989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.776020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.068 [2024-11-15 11:03:48.776032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.824 ms 00:20:02.068 [2024-11-15 11:03:48.776058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.068 [2024-11-15 11:03:48.776180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.068 [2024-11-15 11:03:48.776196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.068 [2024-11-15 11:03:48.776207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:02.069 [2024-11-15 11:03:48.776219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.069 [2024-11-15 11:03:48.776248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.069 [2024-11-15 11:03:48.776263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:02.069 [2024-11-15 11:03:48.776274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.069 [2024-11-15 11:03:48.776284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.069 [2024-11-15 11:03:48.776309] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:02.069 [2024-11-15 11:03:48.781910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.069 [2024-11-15 11:03:48.781945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.069 [2024-11-15 11:03:48.781958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.616 ms 00:20:02.069 [2024-11-15 11:03:48.781984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.069 [2024-11-15 11:03:48.782038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.069 [2024-11-15 11:03:48.782050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:02.069 [2024-11-15 11:03:48.782063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:02.069 [2024-11-15 11:03:48.782073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.069 [2024-11-15 11:03:48.782095] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:02.069 [2024-11-15 11:03:48.782124] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:02.069 [2024-11-15 11:03:48.782162] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:02.069 [2024-11-15 11:03:48.782184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:02.069 [2024-11-15 11:03:48.782279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:02.069 [2024-11-15 11:03:48.782293] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:02.069 [2024-11-15 11:03:48.782307] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
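The layout numbers reported just below are internally consistent and worth a quick check: 23592960 L2P entries at the reported 4-byte address size come to exactly the 90.00 MiB that the NV-cache layout assigns to its l2p region, and the same entry count at the device's 4 KiB block size implies 92160 MiB of logical space, 90% of the 102400 MiB data_btm region, with the remainder looking like over-provisioning. Verified with shell arithmetic on the values from this log:

  echo $(( 23592960 * 4 / 1024 / 1024 ))      # l2p region size in MiB -> 90
  echo $(( 23592960 * 4096 / 1024 / 1024 ))   # logical capacity in MiB -> 92160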
00:20:02.069 [2024-11-15 11:03:48.782320] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782337] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782350] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:02.069 [2024-11-15 11:03:48.782362] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:02.069 [2024-11-15 11:03:48.782372] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:02.069 [2024-11-15 11:03:48.782383] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:02.069 [2024-11-15 11:03:48.782394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.069 [2024-11-15 11:03:48.782405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:02.069 [2024-11-15 11:03:48.782416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:20:02.069 [2024-11-15 11:03:48.782426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.069 [2024-11-15 11:03:48.782501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.069 [2024-11-15 11:03:48.782513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:02.069 [2024-11-15 11:03:48.782528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:02.069 [2024-11-15 11:03:48.782537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.069 [2024-11-15 11:03:48.782641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:02.069 [2024-11-15 11:03:48.782659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:02.069 [2024-11-15 11:03:48.782672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:02.069 [2024-11-15 11:03:48.782703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:02.069 [2024-11-15 11:03:48.782733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.069 [2024-11-15 11:03:48.782755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:02.069 [2024-11-15 11:03:48.782765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:02.069 [2024-11-15 11:03:48.782774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:02.069 [2024-11-15 11:03:48.782796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:02.069 [2024-11-15 11:03:48.782806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:02.069 [2024-11-15 11:03:48.782816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:02.069 [2024-11-15 11:03:48.782835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:02.069 [2024-11-15 11:03:48.782864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:02.069 [2024-11-15 11:03:48.782893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:02.069 [2024-11-15 11:03:48.782922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:02.069 [2024-11-15 11:03:48.782948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:02.069 [2024-11-15 11:03:48.782966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:02.069 [2024-11-15 11:03:48.782976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:02.069 [2024-11-15 11:03:48.782984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.069 [2024-11-15 11:03:48.782993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:02.069 [2024-11-15 11:03:48.783002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:02.069 [2024-11-15 11:03:48.783011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:02.069 [2024-11-15 11:03:48.783020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:02.069 [2024-11-15 11:03:48.783028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:02.069 [2024-11-15 11:03:48.783037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.783046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:02.069 [2024-11-15 11:03:48.783055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:02.069 [2024-11-15 11:03:48.783066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.783076] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:02.069 [2024-11-15 11:03:48.783087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:02.069 [2024-11-15 11:03:48.783098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:02.069 [2024-11-15 11:03:48.783112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:02.069 [2024-11-15 11:03:48.783123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:02.069 [2024-11-15 11:03:48.783133] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:02.069 [2024-11-15 11:03:48.783142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:02.069 [2024-11-15 11:03:48.783151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:02.069 [2024-11-15 11:03:48.783160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:02.069 [2024-11-15 11:03:48.783169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:02.069 [2024-11-15 11:03:48.783181] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:02.069 [2024-11-15 11:03:48.783193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.069 [2024-11-15 11:03:48.783204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:02.069 [2024-11-15 11:03:48.783214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:02.069 [2024-11-15 11:03:48.783224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:02.069 [2024-11-15 11:03:48.783234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:02.069 [2024-11-15 11:03:48.783244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:02.069 [2024-11-15 11:03:48.783254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:02.069 [2024-11-15 11:03:48.783264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:02.069 [2024-11-15 11:03:48.783274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:02.069 [2024-11-15 11:03:48.783285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:02.069 [2024-11-15 11:03:48.783295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:02.069 [2024-11-15 11:03:48.783306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:02.070 [2024-11-15 11:03:48.783316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:02.070 [2024-11-15 11:03:48.783326] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:02.070 [2024-11-15 11:03:48.783336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:02.070 [2024-11-15 11:03:48.783346] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:02.070 [2024-11-15 11:03:48.783357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:02.070 [2024-11-15 11:03:48.783369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:02.070 [2024-11-15 11:03:48.783379] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:02.070 [2024-11-15 11:03:48.783390] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:02.070 [2024-11-15 11:03:48.783401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:02.070 [2024-11-15 11:03:48.783412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.783422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:02.070 [2024-11-15 11:03:48.783437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:20:02.070 [2024-11-15 11:03:48.783448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.829606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.829647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.070 [2024-11-15 11:03:48.829661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.179 ms 00:20:02.070 [2024-11-15 11:03:48.829672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.829845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.829860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:02.070 [2024-11-15 11:03:48.829872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:02.070 [2024-11-15 11:03:48.829883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.891920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.891959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.070 [2024-11-15 11:03:48.891977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.112 ms 00:20:02.070 [2024-11-15 11:03:48.892004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.892080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.892093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.070 [2024-11-15 11:03:48.892106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:02.070 [2024-11-15 11:03:48.892117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.892847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.892868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.070 [2024-11-15 11:03:48.892879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:20:02.070 [2024-11-15 11:03:48.892896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.893031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.893045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.070 [2024-11-15 11:03:48.893058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:02.070 [2024-11-15 11:03:48.893069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.070 [2024-11-15 11:03:48.916135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.070 [2024-11-15 11:03:48.916170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.070 [2024-11-15 11:03:48.916185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.079 ms 00:20:02.070 [2024-11-15 11:03:48.916197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.329 [2024-11-15 11:03:48.935501] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:02.329 [2024-11-15 11:03:48.935548] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:02.329 [2024-11-15 11:03:48.935565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.329 [2024-11-15 11:03:48.935592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:02.329 [2024-11-15 11:03:48.935605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.279 ms 00:20:02.329 [2024-11-15 11:03:48.935617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.329 [2024-11-15 11:03:48.964594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:48.964658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:02.330 [2024-11-15 11:03:48.964674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.922 ms 00:20:02.330 [2024-11-15 11:03:48.964701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:48.982272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:48.982309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:02.330 [2024-11-15 11:03:48.982321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.518 ms 00:20:02.330 [2024-11-15 11:03:48.982349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:48.999051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:48.999089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:02.330 [2024-11-15 11:03:48.999103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.651 ms 00:20:02.330 [2024-11-15 11:03:48.999130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:48.999907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:48.999938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:02.330 [2024-11-15 11:03:48.999951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.669 ms 00:20:02.330 [2024-11-15 11:03:48.999961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.092262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 
11:03:49.092323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:02.330 [2024-11-15 11:03:49.092341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.419 ms 00:20:02.330 [2024-11-15 11:03:49.092369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.102608] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:02.330 [2024-11-15 11:03:49.126895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.126947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:02.330 [2024-11-15 11:03:49.126965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.470 ms 00:20:02.330 [2024-11-15 11:03:49.126993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.127110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.127125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:02.330 [2024-11-15 11:03:49.127138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.330 [2024-11-15 11:03:49.127151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.127218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.127231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:02.330 [2024-11-15 11:03:49.127243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:02.330 [2024-11-15 11:03:49.127253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.127289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.127304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:02.330 [2024-11-15 11:03:49.127315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:02.330 [2024-11-15 11:03:49.127325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.127367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:02.330 [2024-11-15 11:03:49.127380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.127392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:02.330 [2024-11-15 11:03:49.127402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:02.330 [2024-11-15 11:03:49.127412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.163045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.163087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:02.330 [2024-11-15 11:03:49.163103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.664 ms 00:20:02.330 [2024-11-15 11:03:49.163130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.163256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.330 [2024-11-15 11:03:49.163271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:02.330 [2024-11-15 
11:03:49.163283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:02.330 [2024-11-15 11:03:49.163293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.330 [2024-11-15 11:03:49.164588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.330 [2024-11-15 11:03:49.168698] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 426.635 ms, result 0 00:20:02.330 [2024-11-15 11:03:49.169697] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.330 [2024-11-15 11:03:49.187643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:03.706  [2024-11-15T11:03:51.541Z] Copying: 28/256 [MB] (28 MBps) [2024-11-15T11:03:52.479Z] Copying: 53/256 [MB] (25 MBps) [2024-11-15T11:03:53.417Z] Copying: 77/256 [MB] (24 MBps) [2024-11-15T11:03:54.354Z] Copying: 102/256 [MB] (24 MBps) [2024-11-15T11:03:55.290Z] Copying: 127/256 [MB] (25 MBps) [2024-11-15T11:03:56.668Z] Copying: 153/256 [MB] (25 MBps) [2024-11-15T11:03:57.606Z] Copying: 178/256 [MB] (25 MBps) [2024-11-15T11:03:58.542Z] Copying: 204/256 [MB] (26 MBps) [2024-11-15T11:03:59.672Z] Copying: 230/256 [MB] (26 MBps) [2024-11-15T11:03:59.672Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-15 11:03:59.246083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:12.811 [2024-11-15 11:03:59.261208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.261275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:12.811 [2024-11-15 11:03:59.261292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.811 [2024-11-15 11:03:59.261312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.261340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:12.811 [2024-11-15 11:03:59.265831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.265863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:12.811 [2024-11-15 11:03:59.265876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:20:12.811 [2024-11-15 11:03:59.265886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.266122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.266136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:12.811 [2024-11-15 11:03:59.266148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:20:12.811 [2024-11-15 11:03:59.266158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.269162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.269190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:12.811 [2024-11-15 11:03:59.269202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.993 ms 00:20:12.811 [2024-11-15 11:03:59.269229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 
11:03:59.275108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.275143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:12.811 [2024-11-15 11:03:59.275157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.867 ms 00:20:12.811 [2024-11-15 11:03:59.275167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.315276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.315318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:12.811 [2024-11-15 11:03:59.315333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.708 ms 00:20:12.811 [2024-11-15 11:03:59.315342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.336556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.336600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:12.811 [2024-11-15 11:03:59.336631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.185 ms 00:20:12.811 [2024-11-15 11:03:59.336645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.336781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.336795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:12.811 [2024-11-15 11:03:59.336808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:20:12.811 [2024-11-15 11:03:59.336818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.373338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.373376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:12.811 [2024-11-15 11:03:59.373390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.549 ms 00:20:12.811 [2024-11-15 11:03:59.373400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.409275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.409310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:12.811 [2024-11-15 11:03:59.409338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.871 ms 00:20:12.811 [2024-11-15 11:03:59.409348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.445757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.445794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:12.811 [2024-11-15 11:03:59.445808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.392 ms 00:20:12.811 [2024-11-15 11:03:59.445818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.811 [2024-11-15 11:03:59.481773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.811 [2024-11-15 11:03:59.481811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:12.811 [2024-11-15 11:03:59.481824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.926 ms 00:20:12.811 [2024-11-15 11:03:59.481834] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:12.811 [2024-11-15 11:03:59.481892] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:20:12.811 [2024-11-15 11:03:59.481911-482968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 1-98: 0 / 261120 wr_cnt: 0 state: free (98 identical entries condensed)
00:20:12.812 [2024-11-15
11:03:59.482978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:12.812 [2024-11-15 11:03:59.482989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:12.812 [2024-11-15 11:03:59.483008] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:12.812 [2024-11-15 11:03:59.483020] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7f41f37e-4d8b-4639-af11-0cd684c222f0 00:20:12.812 [2024-11-15 11:03:59.483030] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:12.812 [2024-11-15 11:03:59.483041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:12.812 [2024-11-15 11:03:59.483051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:12.812 [2024-11-15 11:03:59.483061] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:12.812 [2024-11-15 11:03:59.483071] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:12.812 [2024-11-15 11:03:59.483082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:12.813 [2024-11-15 11:03:59.483092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:12.813 [2024-11-15 11:03:59.483101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:12.813 [2024-11-15 11:03:59.483110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:12.813 [2024-11-15 11:03:59.483120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.813 [2024-11-15 11:03:59.483134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:12.813 [2024-11-15 11:03:59.483145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.231 ms 00:20:12.813 [2024-11-15 11:03:59.483156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.813 [2024-11-15 11:03:59.503115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.813 [2024-11-15 11:03:59.503153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:12.813 [2024-11-15 11:03:59.503166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.969 ms 00:20:12.813 [2024-11-15 11:03:59.503177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.813 [2024-11-15 11:03:59.503811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.813 [2024-11-15 11:03:59.503833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:12.813 [2024-11-15 11:03:59.503845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:20:12.813 [2024-11-15 11:03:59.503856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.813 [2024-11-15 11:03:59.559487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.813 [2024-11-15 11:03:59.559543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.813 [2024-11-15 11:03:59.559559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.813 [2024-11-15 11:03:59.559570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.813 [2024-11-15 11:03:59.559676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.813 [2024-11-15 11:03:59.559688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
metadata 00:20:12.813 [2024-11-15 11:03:59.559699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.813 [2024-11-15 11:03:59.559709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.813 [2024-11-15 11:03:59.559767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.813 [2024-11-15 11:03:59.559781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.813 [2024-11-15 11:03:59.559792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.813 [2024-11-15 11:03:59.559801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.813 [2024-11-15 11:03:59.559822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:12.813 [2024-11-15 11:03:59.559837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.813 [2024-11-15 11:03:59.559847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:12.813 [2024-11-15 11:03:59.559857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.683687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.683771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.072 [2024-11-15 11:03:59.683789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.683800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.785997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.072 [2024-11-15 11:03:59.786074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.072 [2024-11-15 11:03:59.786222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.072 [2024-11-15 11:03:59.786287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.072 [2024-11-15 11:03:59.786429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786488] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:13.072 [2024-11-15 11:03:59.786498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:13.072 [2024-11-15 11:03:59.786590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:13.072 [2024-11-15 11:03:59.786654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.072 [2024-11-15 11:03:59.786669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:13.072 [2024-11-15 11:03:59.786678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.072 [2024-11-15 11:03:59.786820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 526.462 ms, result 0 00:20:14.008 00:20:14.008 00:20:14.008 11:04:00 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:14.579 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:14.579 11:04:01 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:14.579 11:04:01 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:14.579 11:04:01 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:14.580 11:04:01 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:14.580 11:04:01 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:14.580 11:04:01 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:14.580 Process with pid 76019 is not found 00:20:14.580 11:04:01 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76019 00:20:14.580 11:04:01 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76019 ']' 00:20:14.580 11:04:01 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76019 00:20:14.580 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76019) - No such process 00:20:14.580 11:04:01 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76019 is not found' 00:20:14.580 00:20:14.580 real 1m12.461s 00:20:14.580 user 1m38.660s 00:20:14.580 sys 0m7.696s 00:20:14.580 11:04:01 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.580 11:04:01 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:14.580 ************************************ 00:20:14.580 END TEST ftl_trim 00:20:14.580 ************************************ 00:20:14.840 11:04:01 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:14.840 11:04:01 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:14.840 11:04:01 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.840 11:04:01 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:14.840 ************************************ 
00:20:14.840 START TEST ftl_restore 00:20:14.840 ************************************ 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:14.840 * Looking for test storage... 00:20:14.840 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:14.840 11:04:01 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:14.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.840 --rc genhtml_branch_coverage=1 00:20:14.840 --rc genhtml_function_coverage=1 00:20:14.840 --rc genhtml_legend=1 00:20:14.840 --rc geninfo_all_blocks=1 00:20:14.840 --rc geninfo_unexecuted_blocks=1 00:20:14.840 00:20:14.840 ' 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:14.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.840 --rc genhtml_branch_coverage=1 00:20:14.840 --rc genhtml_function_coverage=1 00:20:14.840 --rc genhtml_legend=1 00:20:14.840 --rc geninfo_all_blocks=1 00:20:14.840 --rc geninfo_unexecuted_blocks=1 00:20:14.840 00:20:14.840 ' 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:14.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.840 --rc genhtml_branch_coverage=1 00:20:14.840 --rc genhtml_function_coverage=1 00:20:14.840 --rc genhtml_legend=1 00:20:14.840 --rc geninfo_all_blocks=1 00:20:14.840 --rc geninfo_unexecuted_blocks=1 00:20:14.840 00:20:14.840 ' 00:20:14.840 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:14.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:14.840 --rc genhtml_branch_coverage=1 00:20:14.840 --rc genhtml_function_coverage=1 00:20:14.840 --rc genhtml_legend=1 00:20:14.840 --rc geninfo_all_blocks=1 00:20:14.840 --rc geninfo_unexecuted_blocks=1 00:20:14.840 00:20:14.840 ' 00:20:14.840 11:04:01 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:15.100 11:04:01 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:15.100 11:04:01 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:15.100 11:04:01 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:15.100 11:04:01 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
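The lt / cmp_versions trace above is scripts/common.sh deciding which lcov option set to apply: both version strings are split into numeric fields and compared field by field, so lcov 1.15 sorts below 2 and the 1.x coverage flags are exported. A minimal bash sketch of that comparison, not the SPDK helper itself (the real cmp_versions also splits on '-' and ':' and handles more operators):

cmp_versions_sketch() {
    local IFS=.                          # split version strings on dots
    local -a a=($1) b=($3) i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && { [[ $2 == '>' ]]; return; }
        (( ${a[i]:-0} < ${b[i]:-0} )) && { [[ $2 == '<' ]]; return; }
    done
    [[ $2 == '<=' || $2 == '>=' ]]       # all fields equal
}
# cmp_versions_sketch 1.15 '<' 2  -> succeeds, matching the 'lt 1.15 2' call traced above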
00:20:15.100 11:04:01 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:15.100 11:04:01 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.hQLeCqqC92 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:15.101 
11:04:01 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76291 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:15.101 11:04:01 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76291 00:20:15.101 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 76291 ']' 00:20:15.101 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.101 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.101 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.101 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.101 11:04:01 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:15.101 [2024-11-15 11:04:01.882654] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:20:15.101 [2024-11-15 11:04:01.883503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76291 ] 00:20:15.362 [2024-11-15 11:04:02.079705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.362 [2024-11-15 11:04:02.196657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.298 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.298 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:16.298 11:04:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:16.298 11:04:03 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:16.298 11:04:03 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:16.298 11:04:03 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:16.298 11:04:03 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:16.299 11:04:03 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:16.558 11:04:03 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:16.558 11:04:03 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:16.558 11:04:03 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:16.558 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:16.558 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:16.558 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:16.558 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:16.558 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:16.817 { 00:20:16.817 "name": "nvme0n1", 00:20:16.817 "aliases": [ 00:20:16.817 "c4a998f1-6925-4456-b23b-1f409935995d" 00:20:16.817 ], 00:20:16.817 "product_name": "NVMe disk", 00:20:16.817 "block_size": 4096, 00:20:16.817 "num_blocks": 1310720, 00:20:16.817 "uuid": 
"c4a998f1-6925-4456-b23b-1f409935995d", 00:20:16.817 "numa_id": -1, 00:20:16.817 "assigned_rate_limits": { 00:20:16.817 "rw_ios_per_sec": 0, 00:20:16.817 "rw_mbytes_per_sec": 0, 00:20:16.817 "r_mbytes_per_sec": 0, 00:20:16.817 "w_mbytes_per_sec": 0 00:20:16.817 }, 00:20:16.817 "claimed": true, 00:20:16.817 "claim_type": "read_many_write_one", 00:20:16.817 "zoned": false, 00:20:16.817 "supported_io_types": { 00:20:16.817 "read": true, 00:20:16.817 "write": true, 00:20:16.817 "unmap": true, 00:20:16.817 "flush": true, 00:20:16.817 "reset": true, 00:20:16.817 "nvme_admin": true, 00:20:16.817 "nvme_io": true, 00:20:16.817 "nvme_io_md": false, 00:20:16.817 "write_zeroes": true, 00:20:16.817 "zcopy": false, 00:20:16.817 "get_zone_info": false, 00:20:16.817 "zone_management": false, 00:20:16.817 "zone_append": false, 00:20:16.817 "compare": true, 00:20:16.817 "compare_and_write": false, 00:20:16.817 "abort": true, 00:20:16.817 "seek_hole": false, 00:20:16.817 "seek_data": false, 00:20:16.817 "copy": true, 00:20:16.817 "nvme_iov_md": false 00:20:16.817 }, 00:20:16.817 "driver_specific": { 00:20:16.817 "nvme": [ 00:20:16.817 { 00:20:16.817 "pci_address": "0000:00:11.0", 00:20:16.817 "trid": { 00:20:16.817 "trtype": "PCIe", 00:20:16.817 "traddr": "0000:00:11.0" 00:20:16.817 }, 00:20:16.817 "ctrlr_data": { 00:20:16.817 "cntlid": 0, 00:20:16.817 "vendor_id": "0x1b36", 00:20:16.817 "model_number": "QEMU NVMe Ctrl", 00:20:16.817 "serial_number": "12341", 00:20:16.817 "firmware_revision": "8.0.0", 00:20:16.817 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:16.817 "oacs": { 00:20:16.817 "security": 0, 00:20:16.817 "format": 1, 00:20:16.817 "firmware": 0, 00:20:16.817 "ns_manage": 1 00:20:16.817 }, 00:20:16.817 "multi_ctrlr": false, 00:20:16.817 "ana_reporting": false 00:20:16.817 }, 00:20:16.817 "vs": { 00:20:16.817 "nvme_version": "1.4" 00:20:16.817 }, 00:20:16.817 "ns_data": { 00:20:16.817 "id": 1, 00:20:16.817 "can_share": false 00:20:16.817 } 00:20:16.817 } 00:20:16.817 ], 00:20:16.817 "mp_policy": "active_passive" 00:20:16.817 } 00:20:16.817 } 00:20:16.817 ]' 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:16.817 11:04:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:16.817 11:04:03 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:16.817 11:04:03 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:16.817 11:04:03 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:16.817 11:04:03 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:16.817 11:04:03 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:17.075 11:04:03 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d7191a5d-b430-4199-960f-58a227da16ce 00:20:17.076 11:04:03 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:17.076 11:04:03 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d7191a5d-b430-4199-960f-58a227da16ce 00:20:17.334 11:04:04 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:17.594 11:04:04 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=c9e7e241-fcf9-44a9-aac2-c0e43e734204 00:20:17.594 11:04:04 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c9e7e241-fcf9-44a9-aac2-c0e43e734204 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=7064403c-e783-4d35-9330-8686abf72103 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7064403c-e783-4d35-9330-8686abf72103 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=7064403c-e783-4d35-9330-8686abf72103 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:17.853 11:04:04 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 7064403c-e783-4d35-9330-8686abf72103 00:20:17.853 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7064403c-e783-4d35-9330-8686abf72103 00:20:17.853 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:17.853 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:17.853 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:17.853 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7064403c-e783-4d35-9330-8686abf72103 00:20:18.111 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:18.111 { 00:20:18.111 "name": "7064403c-e783-4d35-9330-8686abf72103", 00:20:18.111 "aliases": [ 00:20:18.111 "lvs/nvme0n1p0" 00:20:18.111 ], 00:20:18.111 "product_name": "Logical Volume", 00:20:18.111 "block_size": 4096, 00:20:18.111 "num_blocks": 26476544, 00:20:18.111 "uuid": "7064403c-e783-4d35-9330-8686abf72103", 00:20:18.111 "assigned_rate_limits": { 00:20:18.111 "rw_ios_per_sec": 0, 00:20:18.111 "rw_mbytes_per_sec": 0, 00:20:18.111 "r_mbytes_per_sec": 0, 00:20:18.111 "w_mbytes_per_sec": 0 00:20:18.111 }, 00:20:18.111 "claimed": false, 00:20:18.111 "zoned": false, 00:20:18.111 "supported_io_types": { 00:20:18.111 "read": true, 00:20:18.111 "write": true, 00:20:18.111 "unmap": true, 00:20:18.111 "flush": false, 00:20:18.111 "reset": true, 00:20:18.111 "nvme_admin": false, 00:20:18.111 "nvme_io": false, 00:20:18.111 "nvme_io_md": false, 00:20:18.111 "write_zeroes": true, 00:20:18.111 "zcopy": false, 00:20:18.111 "get_zone_info": false, 00:20:18.111 "zone_management": false, 00:20:18.111 "zone_append": false, 00:20:18.111 "compare": false, 00:20:18.111 "compare_and_write": false, 00:20:18.111 "abort": false, 00:20:18.111 "seek_hole": true, 00:20:18.111 "seek_data": true, 00:20:18.111 "copy": false, 00:20:18.111 "nvme_iov_md": false 00:20:18.111 }, 00:20:18.111 "driver_specific": { 00:20:18.111 "lvol": { 00:20:18.111 "lvol_store_uuid": "c9e7e241-fcf9-44a9-aac2-c0e43e734204", 00:20:18.111 "base_bdev": "nvme0n1", 00:20:18.111 "thin_provision": true, 00:20:18.111 "num_allocated_clusters": 0, 00:20:18.111 "snapshot": false, 00:20:18.111 "clone": false, 00:20:18.111 "esnap_clone": false 00:20:18.111 } 00:20:18.111 } 00:20:18.111 } 00:20:18.111 ]' 00:20:18.111 11:04:04 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:18.111 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:18.111 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:18.111 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:18.111 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:18.111 11:04:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:18.111 11:04:04 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:18.111 11:04:04 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:18.111 11:04:04 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:18.370 11:04:05 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:18.370 11:04:05 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:18.370 11:04:05 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 7064403c-e783-4d35-9330-8686abf72103 00:20:18.370 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7064403c-e783-4d35-9330-8686abf72103 00:20:18.370 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:18.370 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:18.370 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:18.370 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7064403c-e783-4d35-9330-8686abf72103 00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:18.628 { 00:20:18.628 "name": "7064403c-e783-4d35-9330-8686abf72103", 00:20:18.628 "aliases": [ 00:20:18.628 "lvs/nvme0n1p0" 00:20:18.628 ], 00:20:18.628 "product_name": "Logical Volume", 00:20:18.628 "block_size": 4096, 00:20:18.628 "num_blocks": 26476544, 00:20:18.628 "uuid": "7064403c-e783-4d35-9330-8686abf72103", 00:20:18.628 "assigned_rate_limits": { 00:20:18.628 "rw_ios_per_sec": 0, 00:20:18.628 "rw_mbytes_per_sec": 0, 00:20:18.628 "r_mbytes_per_sec": 0, 00:20:18.628 "w_mbytes_per_sec": 0 00:20:18.628 }, 00:20:18.628 "claimed": false, 00:20:18.628 "zoned": false, 00:20:18.628 "supported_io_types": { 00:20:18.628 "read": true, 00:20:18.628 "write": true, 00:20:18.628 "unmap": true, 00:20:18.628 "flush": false, 00:20:18.628 "reset": true, 00:20:18.628 "nvme_admin": false, 00:20:18.628 "nvme_io": false, 00:20:18.628 "nvme_io_md": false, 00:20:18.628 "write_zeroes": true, 00:20:18.628 "zcopy": false, 00:20:18.628 "get_zone_info": false, 00:20:18.628 "zone_management": false, 00:20:18.628 "zone_append": false, 00:20:18.628 "compare": false, 00:20:18.628 "compare_and_write": false, 00:20:18.628 "abort": false, 00:20:18.628 "seek_hole": true, 00:20:18.628 "seek_data": true, 00:20:18.628 "copy": false, 00:20:18.628 "nvme_iov_md": false 00:20:18.628 }, 00:20:18.628 "driver_specific": { 00:20:18.628 "lvol": { 00:20:18.628 "lvol_store_uuid": "c9e7e241-fcf9-44a9-aac2-c0e43e734204", 00:20:18.628 "base_bdev": "nvme0n1", 00:20:18.628 "thin_provision": true, 00:20:18.628 "num_allocated_clusters": 0, 00:20:18.628 "snapshot": false, 00:20:18.628 "clone": false, 00:20:18.628 "esnap_clone": false 00:20:18.628 } 00:20:18.628 } 00:20:18.628 } 00:20:18.628 ]' 00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
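The @1382-@1392 trace surrounding the JSON dumps above is the get_bdev_size helper turning a bdev's description into a size in MiB: fetch the bdev JSON once, pull block_size and num_blocks with jq, multiply, and divide down to MiB. A condensed sketch of that calculation, with the values this run produced shown in comments:

get_bdev_size_sketch() {
    local bdev_name=$1 bdev_info bs nb
    bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
    bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 in this run
    nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544 for the lvol, 1310720 for nvme0n1
    echo $(( bs * nb / 1024 / 1024 ))              # -> 103424 MiB (lvol) or 5120 MiB (nvme0n1)
}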
00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:18.628 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:18.628 11:04:05 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:18.628 11:04:05 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:18.888 11:04:05 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:18.888 11:04:05 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 7064403c-e783-4d35-9330-8686abf72103 00:20:18.888 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=7064403c-e783-4d35-9330-8686abf72103 00:20:18.888 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:18.888 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:18.888 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:18.888 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7064403c-e783-4d35-9330-8686abf72103 00:20:19.147 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:19.147 { 00:20:19.147 "name": "7064403c-e783-4d35-9330-8686abf72103", 00:20:19.147 "aliases": [ 00:20:19.147 "lvs/nvme0n1p0" 00:20:19.147 ], 00:20:19.147 "product_name": "Logical Volume", 00:20:19.147 "block_size": 4096, 00:20:19.147 "num_blocks": 26476544, 00:20:19.147 "uuid": "7064403c-e783-4d35-9330-8686abf72103", 00:20:19.147 "assigned_rate_limits": { 00:20:19.147 "rw_ios_per_sec": 0, 00:20:19.147 "rw_mbytes_per_sec": 0, 00:20:19.147 "r_mbytes_per_sec": 0, 00:20:19.147 "w_mbytes_per_sec": 0 00:20:19.147 }, 00:20:19.147 "claimed": false, 00:20:19.147 "zoned": false, 00:20:19.147 "supported_io_types": { 00:20:19.147 "read": true, 00:20:19.147 "write": true, 00:20:19.147 "unmap": true, 00:20:19.147 "flush": false, 00:20:19.147 "reset": true, 00:20:19.147 "nvme_admin": false, 00:20:19.147 "nvme_io": false, 00:20:19.147 "nvme_io_md": false, 00:20:19.147 "write_zeroes": true, 00:20:19.147 "zcopy": false, 00:20:19.147 "get_zone_info": false, 00:20:19.147 "zone_management": false, 00:20:19.147 "zone_append": false, 00:20:19.147 "compare": false, 00:20:19.147 "compare_and_write": false, 00:20:19.147 "abort": false, 00:20:19.147 "seek_hole": true, 00:20:19.147 "seek_data": true, 00:20:19.147 "copy": false, 00:20:19.147 "nvme_iov_md": false 00:20:19.147 }, 00:20:19.147 "driver_specific": { 00:20:19.147 "lvol": { 00:20:19.147 "lvol_store_uuid": "c9e7e241-fcf9-44a9-aac2-c0e43e734204", 00:20:19.147 "base_bdev": "nvme0n1", 00:20:19.147 "thin_provision": true, 00:20:19.147 "num_allocated_clusters": 0, 00:20:19.147 "snapshot": false, 00:20:19.147 "clone": false, 00:20:19.147 "esnap_clone": false 00:20:19.147 } 00:20:19.147 } 00:20:19.147 } 00:20:19.147 ]' 00:20:19.147 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:19.147 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:19.147 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:19.147 11:04:05 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:19.147 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:19.147 11:04:05 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:19.147 11:04:05 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:19.147 11:04:05 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7064403c-e783-4d35-9330-8686abf72103 --l2p_dram_limit 10' 00:20:19.147 11:04:05 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:19.147 11:04:05 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:19.148 11:04:05 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:19.148 11:04:05 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:19.148 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:19.148 11:04:05 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7064403c-e783-4d35-9330-8686abf72103 --l2p_dram_limit 10 -c nvc0n1p0 00:20:19.407 [2024-11-15 11:04:06.108239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.108290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.407 [2024-11-15 11:04:06.108308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:19.407 [2024-11-15 11:04:06.108320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.108405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.108418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.407 [2024-11-15 11:04:06.108432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:19.407 [2024-11-15 11:04:06.108442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.108473] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.407 [2024-11-15 11:04:06.109501] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.407 [2024-11-15 11:04:06.109564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.109577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.407 [2024-11-15 11:04:06.109591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:20:19.407 [2024-11-15 11:04:06.109601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.109738] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8e67806c-8a2b-44dc-bcca-3a4948b5bfb5 00:20:19.407 [2024-11-15 11:04:06.111169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.111209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:19.407 [2024-11-15 11:04:06.111222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:19.407 [2024-11-15 11:04:06.111235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.118782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 
11:04:06.118814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.407 [2024-11-15 11:04:06.118829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.515 ms 00:20:19.407 [2024-11-15 11:04:06.118842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.118954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.118970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.407 [2024-11-15 11:04:06.118982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:20:19.407 [2024-11-15 11:04:06.118999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.119079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.119095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.407 [2024-11-15 11:04:06.119105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.407 [2024-11-15 11:04:06.119121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.119162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.407 [2024-11-15 11:04:06.124580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.124616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.407 [2024-11-15 11:04:06.124649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.446 ms 00:20:19.407 [2024-11-15 11:04:06.124660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.124705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.124716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.407 [2024-11-15 11:04:06.124730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:19.407 [2024-11-15 11:04:06.124740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.124778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:19.407 [2024-11-15 11:04:06.124910] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.407 [2024-11-15 11:04:06.124930] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.407 [2024-11-15 11:04:06.124944] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.407 [2024-11-15 11:04:06.124960] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.407 [2024-11-15 11:04:06.124972] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.407 [2024-11-15 11:04:06.124986] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:19.407 [2024-11-15 11:04:06.124996] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.407 [2024-11-15 11:04:06.125011] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.407 [2024-11-15 11:04:06.125021] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.407 [2024-11-15 11:04:06.125033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.125044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.407 [2024-11-15 11:04:06.125057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:20:19.407 [2024-11-15 11:04:06.125077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.125155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.407 [2024-11-15 11:04:06.125166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.407 [2024-11-15 11:04:06.125179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:19.407 [2024-11-15 11:04:06.125189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.407 [2024-11-15 11:04:06.125284] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.407 [2024-11-15 11:04:06.125298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.407 [2024-11-15 11:04:06.125311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.407 [2024-11-15 11:04:06.125322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.407 [2024-11-15 11:04:06.125344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:19.407 [2024-11-15 11:04:06.125365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.407 [2024-11-15 11:04:06.125378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.407 [2024-11-15 11:04:06.125398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.407 [2024-11-15 11:04:06.125407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:19.407 [2024-11-15 11:04:06.125419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.407 [2024-11-15 11:04:06.125428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.407 [2024-11-15 11:04:06.125440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:19.407 [2024-11-15 11:04:06.125449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.407 [2024-11-15 11:04:06.125472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:19.407 [2024-11-15 11:04:06.125485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.407 [2024-11-15 11:04:06.125507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.407 [2024-11-15 11:04:06.125546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.407 
[2024-11-15 11:04:06.125557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:19.407 [2024-11-15 11:04:06.125568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.407 [2024-11-15 11:04:06.125577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.408 [2024-11-15 11:04:06.125589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:19.408 [2024-11-15 11:04:06.125598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.408 [2024-11-15 11:04:06.125610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.408 [2024-11-15 11:04:06.125619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:19.408 [2024-11-15 11:04:06.125631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.408 [2024-11-15 11:04:06.125640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.408 [2024-11-15 11:04:06.125654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:19.408 [2024-11-15 11:04:06.125663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.408 [2024-11-15 11:04:06.125674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.408 [2024-11-15 11:04:06.125684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:19.408 [2024-11-15 11:04:06.125696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.408 [2024-11-15 11:04:06.125705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.408 [2024-11-15 11:04:06.125717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:19.408 [2024-11-15 11:04:06.125726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.408 [2024-11-15 11:04:06.125737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.408 [2024-11-15 11:04:06.125747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:19.408 [2024-11-15 11:04:06.125758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.408 [2024-11-15 11:04:06.125767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.408 [2024-11-15 11:04:06.125779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.408 [2024-11-15 11:04:06.125790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.408 [2024-11-15 11:04:06.125804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.408 [2024-11-15 11:04:06.125815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.408 [2024-11-15 11:04:06.125829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.408 [2024-11-15 11:04:06.125839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.408 [2024-11-15 11:04:06.125851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.408 [2024-11-15 11:04:06.125860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.408 [2024-11-15 11:04:06.125872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.408 [2024-11-15 11:04:06.125886] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.408 [2024-11-15 
11:04:06.125901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.125916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:19.408 [2024-11-15 11:04:06.125929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:19.408 [2024-11-15 11:04:06.125939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:19.408 [2024-11-15 11:04:06.125952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:19.408 [2024-11-15 11:04:06.125963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:19.408 [2024-11-15 11:04:06.125976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:19.408 [2024-11-15 11:04:06.125986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:19.408 [2024-11-15 11:04:06.125998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:19.408 [2024-11-15 11:04:06.126009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:19.408 [2024-11-15 11:04:06.126024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.126035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.126047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.126058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.126073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:19.408 [2024-11-15 11:04:06.126084] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.408 [2024-11-15 11:04:06.126098] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.126110] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.408 [2024-11-15 11:04:06.126123] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.408 [2024-11-15 11:04:06.126134] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.408 [2024-11-15 11:04:06.126146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.408 [2024-11-15 11:04:06.126157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.408 [2024-11-15 11:04:06.126171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.408 [2024-11-15 11:04:06.126187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.932 ms 00:20:19.408 [2024-11-15 11:04:06.126199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.408 [2024-11-15 11:04:06.126258] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:19.408 [2024-11-15 11:04:06.126284] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:23.627 [2024-11-15 11:04:09.792873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.792946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:23.627 [2024-11-15 11:04:09.792964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3672.581 ms 00:20:23.627 [2024-11-15 11:04:09.792978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.833125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.833184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:23.627 [2024-11-15 11:04:09.833200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.820 ms 00:20:23.627 [2024-11-15 11:04:09.833214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.833355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.833372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:23.627 [2024-11-15 11:04:09.833384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:23.627 [2024-11-15 11:04:09.833400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.881684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.881740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:23.627 [2024-11-15 11:04:09.881755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.295 ms 00:20:23.627 [2024-11-15 11:04:09.881770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.881814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.881833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:23.627 [2024-11-15 11:04:09.881844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.627 [2024-11-15 11:04:09.881857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.882350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.882378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:23.627 [2024-11-15 11:04:09.882390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:20:23.627 [2024-11-15 11:04:09.882403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 
[2024-11-15 11:04:09.882503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.882517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:23.627 [2024-11-15 11:04:09.882545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:23.627 [2024-11-15 11:04:09.882561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.903721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.903772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:23.627 [2024-11-15 11:04:09.903787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.173 ms 00:20:23.627 [2024-11-15 11:04:09.903817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:09.916538] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:23.627 [2024-11-15 11:04:09.919779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:09.919808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:23.627 [2024-11-15 11:04:09.919823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.885 ms 00:20:23.627 [2024-11-15 11:04:09.919834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.021642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.021708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:23.627 [2024-11-15 11:04:10.021728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.916 ms 00:20:23.627 [2024-11-15 11:04:10.021740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.021948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.021965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:23.627 [2024-11-15 11:04:10.021982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:20:23.627 [2024-11-15 11:04:10.021993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.058177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.058217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:23.627 [2024-11-15 11:04:10.058235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.186 ms 00:20:23.627 [2024-11-15 11:04:10.058245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.093682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.093718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:23.627 [2024-11-15 11:04:10.093735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.443 ms 00:20:23.627 [2024-11-15 11:04:10.093745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.094453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.094481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:23.627 
[2024-11-15 11:04:10.094495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:20:23.627 [2024-11-15 11:04:10.094506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.194184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.194235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:23.627 [2024-11-15 11:04:10.194258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.764 ms 00:20:23.627 [2024-11-15 11:04:10.194270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.231581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.231629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:23.627 [2024-11-15 11:04:10.231646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.278 ms 00:20:23.627 [2024-11-15 11:04:10.231657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.268275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.268317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:23.627 [2024-11-15 11:04:10.268334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.626 ms 00:20:23.627 [2024-11-15 11:04:10.268344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.304482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.304530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:23.627 [2024-11-15 11:04:10.304547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.146 ms 00:20:23.627 [2024-11-15 11:04:10.304559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.304611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.304623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:23.627 [2024-11-15 11:04:10.304640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:23.627 [2024-11-15 11:04:10.304650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.304767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.627 [2024-11-15 11:04:10.304780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:23.627 [2024-11-15 11:04:10.304797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:23.627 [2024-11-15 11:04:10.304807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.627 [2024-11-15 11:04:10.305818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4203.947 ms, result 0 00:20:23.627 { 00:20:23.627 "name": "ftl0", 00:20:23.627 "uuid": "8e67806c-8a2b-44dc-bcca-3a4948b5bfb5" 00:20:23.627 } 00:20:23.627 11:04:10 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:23.627 11:04:10 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:23.886 11:04:10 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:23.886 11:04:10 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:23.886 [2024-11-15 11:04:10.728769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.886 [2024-11-15 11:04:10.728836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:23.886 [2024-11-15 11:04:10.728853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:23.886 [2024-11-15 11:04:10.728875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.886 [2024-11-15 11:04:10.728903] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:23.886 [2024-11-15 11:04:10.733020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.886 [2024-11-15 11:04:10.733053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:23.886 [2024-11-15 11:04:10.733069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:20:23.886 [2024-11-15 11:04:10.733079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.886 [2024-11-15 11:04:10.733330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.886 [2024-11-15 11:04:10.733348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:23.886 [2024-11-15 11:04:10.733381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:20:23.886 [2024-11-15 11:04:10.733392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.886 [2024-11-15 11:04:10.735922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.886 [2024-11-15 11:04:10.735944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:23.886 [2024-11-15 11:04:10.735959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:20:23.886 [2024-11-15 11:04:10.735970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.886 [2024-11-15 11:04:10.740996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.886 [2024-11-15 11:04:10.741029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:23.886 [2024-11-15 11:04:10.741046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.008 ms 00:20:23.886 [2024-11-15 11:04:10.741073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.778138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.778177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:24.146 [2024-11-15 11:04:10.778194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.020 ms 00:20:24.146 [2024-11-15 11:04:10.778204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.799906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.799960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:24.146 [2024-11-15 11:04:10.799978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.687 ms 00:20:24.146 [2024-11-15 11:04:10.799988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.800158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.800172] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:24.146 [2024-11-15 11:04:10.800187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:24.146 [2024-11-15 11:04:10.800198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.836517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.836558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:24.146 [2024-11-15 11:04:10.836591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.356 ms 00:20:24.146 [2024-11-15 11:04:10.836600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.872268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.872303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:24.146 [2024-11-15 11:04:10.872335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.682 ms 00:20:24.146 [2024-11-15 11:04:10.872344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.907762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.907808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:24.146 [2024-11-15 11:04:10.907841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.427 ms 00:20:24.146 [2024-11-15 11:04:10.907851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.943900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.146 [2024-11-15 11:04:10.943938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:24.146 [2024-11-15 11:04:10.943954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.006 ms 00:20:24.146 [2024-11-15 11:04:10.943980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.146 [2024-11-15 11:04:10.944023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:24.146 [2024-11-15 11:04:10.944040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:24.146 [2024-11-15 11:04:10.944056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:24.146 [2024-11-15 11:04:10.944066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:24.146 [2024-11-15 11:04:10.944080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:24.146 [2024-11-15 11:04:10.944091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944154] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 
[2024-11-15 11:04:10.944456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:24.147 [2024-11-15 11:04:10.944781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.944998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:24.147 [2024-11-15 11:04:10.945326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:24.148 [2024-11-15 11:04:10.945339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:24.148 [2024-11-15 11:04:10.945350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:24.148 [2024-11-15 11:04:10.945362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:24.148 [2024-11-15 11:04:10.945373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:24.148 [2024-11-15 11:04:10.945387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:24.148 [2024-11-15 11:04:10.945406] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:24.148 [2024-11-15 11:04:10.945421] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e67806c-8a2b-44dc-bcca-3a4948b5bfb5 00:20:24.148 [2024-11-15 11:04:10.945433] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:24.148 [2024-11-15 11:04:10.945448] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:24.148 [2024-11-15 11:04:10.945457] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:24.148 [2024-11-15 11:04:10.945474] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:24.148 [2024-11-15 11:04:10.945484] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:24.148 [2024-11-15 11:04:10.945496] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:24.148 [2024-11-15 11:04:10.945506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:24.148 [2024-11-15 11:04:10.945517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:24.148 [2024-11-15 11:04:10.945547] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:24.148 [2024-11-15 11:04:10.945560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.148 [2024-11-15 11:04:10.945571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:24.148 [2024-11-15 11:04:10.945584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.541 ms 00:20:24.148 [2024-11-15 11:04:10.945594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.148 [2024-11-15 11:04:10.965042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.148 [2024-11-15 11:04:10.965077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:24.148 [2024-11-15 11:04:10.965092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.419 ms 00:20:24.148 [2024-11-15 11:04:10.965119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.148 [2024-11-15 11:04:10.965740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.148 [2024-11-15 11:04:10.965759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:24.148 [2024-11-15 11:04:10.965773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.592 ms 00:20:24.148 [2024-11-15 11:04:10.965786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.031662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.031700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.407 [2024-11-15 11:04:11.031716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.031743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.031807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.031818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.407 [2024-11-15 11:04:11.031831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.031844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.031950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.031976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.407 [2024-11-15 11:04:11.031989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.032000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.032025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.032036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.407 [2024-11-15 11:04:11.032049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.032060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.156500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.156559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.407 [2024-11-15 11:04:11.156578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:20:24.407 [2024-11-15 11:04:11.156589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.257466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.257547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.407 [2024-11-15 11:04:11.257567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.257582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.257711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.257724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:24.407 [2024-11-15 11:04:11.257738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.257748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.257811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.257823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:24.407 [2024-11-15 11:04:11.257836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.257846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.258103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.258119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:24.407 [2024-11-15 11:04:11.258132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.258143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.258191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.258203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:24.407 [2024-11-15 11:04:11.258216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.258226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.258272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.258285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:24.407 [2024-11-15 11:04:11.258298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.258308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.258358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:24.407 [2024-11-15 11:04:11.258370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:24.407 [2024-11-15 11:04:11.258382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:24.407 [2024-11-15 11:04:11.258393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.407 [2024-11-15 11:04:11.258544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.582 ms, result 0 00:20:24.407 true 00:20:24.666 11:04:11 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76291 
00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76291 ']' 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76291 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76291 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.666 killing process with pid 76291 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76291' 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 76291 00:20:24.666 11:04:11 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 76291 00:20:29.941 11:04:16 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:34.136 262144+0 records in 00:20:34.136 262144+0 records out 00:20:34.136 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.2989 s, 250 MB/s 00:20:34.136 11:04:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:35.513 11:04:22 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.513 [2024-11-15 11:04:22.355742] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:20:35.513 [2024-11-15 11:04:22.355881] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76545 ] 00:20:35.772 [2024-11-15 11:04:22.546256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.030 [2024-11-15 11:04:22.662126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.288 [2024-11-15 11:04:23.034049] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.288 [2024-11-15 11:04:23.034121] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:36.549 [2024-11-15 11:04:23.201805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.201857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:36.549 [2024-11-15 11:04:23.201879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.549 [2024-11-15 11:04:23.201891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.201947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.201960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:36.549 [2024-11-15 11:04:23.201975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:36.549 [2024-11-15 11:04:23.201985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.202007] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:20:36.549 [2024-11-15 11:04:23.202917] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:36.549 [2024-11-15 11:04:23.202946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.202958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:36.549 [2024-11-15 11:04:23.202970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:20:36.549 [2024-11-15 11:04:23.202980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.204418] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:36.549 [2024-11-15 11:04:23.223961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.224004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:36.549 [2024-11-15 11:04:23.224021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.574 ms 00:20:36.549 [2024-11-15 11:04:23.224032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.224104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.224117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:36.549 [2024-11-15 11:04:23.224128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:36.549 [2024-11-15 11:04:23.224139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.231012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.231041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:36.549 [2024-11-15 11:04:23.231053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.808 ms 00:20:36.549 [2024-11-15 11:04:23.231064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.231148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.231162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:36.549 [2024-11-15 11:04:23.231173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:36.549 [2024-11-15 11:04:23.231183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.231225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.231237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:36.549 [2024-11-15 11:04:23.231248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:36.549 [2024-11-15 11:04:23.231258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.231283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:36.549 [2024-11-15 11:04:23.236608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.236749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:36.549 [2024-11-15 11:04:23.236877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.339 ms 00:20:36.549 [2024-11-15 11:04:23.236923] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.236980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.237013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:36.549 [2024-11-15 11:04:23.237094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:36.549 [2024-11-15 11:04:23.237129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.237210] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:36.549 [2024-11-15 11:04:23.237260] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:36.549 [2024-11-15 11:04:23.237374] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:36.549 [2024-11-15 11:04:23.237400] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:36.549 [2024-11-15 11:04:23.237492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:36.549 [2024-11-15 11:04:23.237506] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:36.549 [2024-11-15 11:04:23.237520] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:36.549 [2024-11-15 11:04:23.237561] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:36.549 [2024-11-15 11:04:23.237575] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:36.549 [2024-11-15 11:04:23.237587] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:36.549 [2024-11-15 11:04:23.237597] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:36.549 [2024-11-15 11:04:23.237607] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:36.549 [2024-11-15 11:04:23.237617] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:36.549 [2024-11-15 11:04:23.237631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.237642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:36.549 [2024-11-15 11:04:23.237653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:20:36.549 [2024-11-15 11:04:23.237663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.237746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.549 [2024-11-15 11:04:23.237758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:36.549 [2024-11-15 11:04:23.237768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:20:36.549 [2024-11-15 11:04:23.237779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.549 [2024-11-15 11:04:23.237874] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:36.549 [2024-11-15 11:04:23.237893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:36.549 [2024-11-15 11:04:23.237904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:36.549 [2024-11-15 11:04:23.237915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.549 [2024-11-15 11:04:23.237926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:36.549 [2024-11-15 11:04:23.237935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:36.549 [2024-11-15 11:04:23.237945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:36.549 [2024-11-15 11:04:23.237954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:36.549 [2024-11-15 11:04:23.237963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:36.549 [2024-11-15 11:04:23.237972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.549 [2024-11-15 11:04:23.237981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:36.549 [2024-11-15 11:04:23.237992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:36.549 [2024-11-15 11:04:23.238001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:36.549 [2024-11-15 11:04:23.238010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:36.550 [2024-11-15 11:04:23.238020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:36.550 [2024-11-15 11:04:23.238039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:36.550 [2024-11-15 11:04:23.238058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:36.550 [2024-11-15 11:04:23.238086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:36.550 [2024-11-15 11:04:23.238114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238123] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:36.550 [2024-11-15 11:04:23.238141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:36.550 [2024-11-15 11:04:23.238168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:36.550 [2024-11-15 11:04:23.238196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.550 [2024-11-15 11:04:23.238214] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:36.550 [2024-11-15 11:04:23.238223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:36.550 [2024-11-15 11:04:23.238232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:36.550 [2024-11-15 11:04:23.238241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:36.550 [2024-11-15 11:04:23.238250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:36.550 [2024-11-15 11:04:23.238259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:36.550 [2024-11-15 11:04:23.238277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:36.550 [2024-11-15 11:04:23.238286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238297] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:36.550 [2024-11-15 11:04:23.238307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:36.550 [2024-11-15 11:04:23.238316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:36.550 [2024-11-15 11:04:23.238337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:36.550 [2024-11-15 11:04:23.238346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:36.550 [2024-11-15 11:04:23.238355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:36.550 [2024-11-15 11:04:23.238365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:36.550 [2024-11-15 11:04:23.238374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:36.550 [2024-11-15 11:04:23.238383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:36.550 [2024-11-15 11:04:23.238394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:36.550 [2024-11-15 11:04:23.238407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238419] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:36.550 [2024-11-15 11:04:23.238429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:36.550 [2024-11-15 11:04:23.238440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:36.550 [2024-11-15 11:04:23.238451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:36.550 [2024-11-15 11:04:23.238461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:36.550 [2024-11-15 11:04:23.238471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:36.550 [2024-11-15 11:04:23.238481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:36.550 [2024-11-15 11:04:23.238491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:36.550 [2024-11-15 11:04:23.238501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:36.550 [2024-11-15 11:04:23.238512] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:36.550 [2024-11-15 11:04:23.238581] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:36.550 [2024-11-15 11:04:23.238596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238616] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:36.550 [2024-11-15 11:04:23.238628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:36.550 [2024-11-15 11:04:23.238638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:36.550 [2024-11-15 11:04:23.238649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:36.550 [2024-11-15 11:04:23.238660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.238670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:36.550 [2024-11-15 11:04:23.238681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:20:36.550 [2024-11-15 11:04:23.238691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.275743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.275919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:36.550 [2024-11-15 11:04:23.275945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.063 ms 00:20:36.550 [2024-11-15 11:04:23.275957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.276065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.276076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:36.550 [2024-11-15 11:04:23.276087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.056 ms 00:20:36.550 [2024-11-15 11:04:23.276097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.335761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.335809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:36.550 [2024-11-15 11:04:23.335824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.686 ms 00:20:36.550 [2024-11-15 11:04:23.335835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.335908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.335920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:36.550 [2024-11-15 11:04:23.335932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:36.550 [2024-11-15 11:04:23.335951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.336450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.336465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:36.550 [2024-11-15 11:04:23.336476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:20:36.550 [2024-11-15 11:04:23.336486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.336635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.336651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:36.550 [2024-11-15 11:04:23.336662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:20:36.550 [2024-11-15 11:04:23.336681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.356503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.356556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:36.550 [2024-11-15 11:04:23.356578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.830 ms 00:20:36.550 [2024-11-15 11:04:23.356589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.376296] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:36.550 [2024-11-15 11:04:23.376336] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:36.550 [2024-11-15 11:04:23.376351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.550 [2024-11-15 11:04:23.376363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:36.550 [2024-11-15 11:04:23.376375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.656 ms 00:20:36.550 [2024-11-15 11:04:23.376385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.550 [2024-11-15 11:04:23.406465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.406646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:36.811 [2024-11-15 11:04:23.406683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.034 ms 00:20:36.811 [2024-11-15 11:04:23.406694] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.424658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.424710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:36.811 [2024-11-15 11:04:23.424724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.948 ms 00:20:36.811 [2024-11-15 11:04:23.424734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.442813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.442850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:36.811 [2024-11-15 11:04:23.442864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.069 ms 00:20:36.811 [2024-11-15 11:04:23.442873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.443736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.443765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:36.811 [2024-11-15 11:04:23.443777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:20:36.811 [2024-11-15 11:04:23.443787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.532173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.532236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:36.811 [2024-11-15 11:04:23.532253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.499 ms 00:20:36.811 [2024-11-15 11:04:23.532287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.542829] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:36.811 [2024-11-15 11:04:23.545437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.545468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:36.811 [2024-11-15 11:04:23.545483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.123 ms 00:20:36.811 [2024-11-15 11:04:23.545510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.545614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.545629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:36.811 [2024-11-15 11:04:23.545641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:36.811 [2024-11-15 11:04:23.545651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.545749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.545762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:36.811 [2024-11-15 11:04:23.545773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:36.811 [2024-11-15 11:04:23.545783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.545807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.545819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:20:36.811 [2024-11-15 11:04:23.545829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:36.811 [2024-11-15 11:04:23.545839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.545870] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:36.811 [2024-11-15 11:04:23.545881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.545894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:36.811 [2024-11-15 11:04:23.545905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:36.811 [2024-11-15 11:04:23.545915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.582945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.582983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:36.811 [2024-11-15 11:04:23.582998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.068 ms 00:20:36.811 [2024-11-15 11:04:23.583026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.583106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:36.811 [2024-11-15 11:04:23.583119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:36.811 [2024-11-15 11:04:23.583131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:36.811 [2024-11-15 11:04:23.583141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:36.811 [2024-11-15 11:04:23.584222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.597 ms, result 0 00:20:37.750  [2024-11-15T11:04:26.022Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-15T11:04:26.631Z] Copying: 47/1024 [MB] (24 MBps) [2024-11-15T11:04:28.009Z] Copying: 71/1024 [MB] (24 MBps) [2024-11-15T11:04:28.947Z] Copying: 93/1024 [MB] (21 MBps) [2024-11-15T11:04:29.884Z] Copying: 116/1024 [MB] (22 MBps) [2024-11-15T11:04:30.820Z] Copying: 138/1024 [MB] (22 MBps) [2024-11-15T11:04:31.759Z] Copying: 161/1024 [MB] (22 MBps) [2024-11-15T11:04:32.697Z] Copying: 183/1024 [MB] (22 MBps) [2024-11-15T11:04:33.637Z] Copying: 205/1024 [MB] (21 MBps) [2024-11-15T11:04:35.018Z] Copying: 227/1024 [MB] (22 MBps) [2024-11-15T11:04:35.586Z] Copying: 250/1024 [MB] (22 MBps) [2024-11-15T11:04:36.967Z] Copying: 272/1024 [MB] (22 MBps) [2024-11-15T11:04:37.905Z] Copying: 294/1024 [MB] (22 MBps) [2024-11-15T11:04:38.843Z] Copying: 317/1024 [MB] (22 MBps) [2024-11-15T11:04:39.783Z] Copying: 339/1024 [MB] (22 MBps) [2024-11-15T11:04:40.718Z] Copying: 362/1024 [MB] (22 MBps) [2024-11-15T11:04:41.655Z] Copying: 385/1024 [MB] (22 MBps) [2024-11-15T11:04:42.594Z] Copying: 409/1024 [MB] (24 MBps) [2024-11-15T11:04:44.003Z] Copying: 433/1024 [MB] (24 MBps) [2024-11-15T11:04:44.572Z] Copying: 458/1024 [MB] (24 MBps) [2024-11-15T11:04:45.971Z] Copying: 484/1024 [MB] (25 MBps) [2024-11-15T11:04:46.911Z] Copying: 508/1024 [MB] (24 MBps) [2024-11-15T11:04:47.849Z] Copying: 532/1024 [MB] (23 MBps) [2024-11-15T11:04:48.788Z] Copying: 555/1024 [MB] (23 MBps) [2024-11-15T11:04:49.729Z] Copying: 577/1024 [MB] (22 MBps) [2024-11-15T11:04:50.671Z] Copying: 600/1024 [MB] (22 MBps) [2024-11-15T11:04:51.609Z] Copying: 623/1024 [MB] (22 
MBps) [2024-11-15T11:04:52.987Z] Copying: 646/1024 [MB] (22 MBps) [2024-11-15T11:04:53.555Z] Copying: 669/1024 [MB] (23 MBps) [2024-11-15T11:04:54.932Z] Copying: 692/1024 [MB] (23 MBps) [2024-11-15T11:04:55.869Z] Copying: 715/1024 [MB] (23 MBps) [2024-11-15T11:04:56.807Z] Copying: 738/1024 [MB] (23 MBps) [2024-11-15T11:04:57.772Z] Copying: 762/1024 [MB] (23 MBps) [2024-11-15T11:04:58.709Z] Copying: 784/1024 [MB] (22 MBps) [2024-11-15T11:04:59.646Z] Copying: 806/1024 [MB] (22 MBps) [2024-11-15T11:05:00.582Z] Copying: 829/1024 [MB] (22 MBps) [2024-11-15T11:05:01.960Z] Copying: 852/1024 [MB] (23 MBps) [2024-11-15T11:05:02.900Z] Copying: 875/1024 [MB] (22 MBps) [2024-11-15T11:05:03.839Z] Copying: 898/1024 [MB] (22 MBps) [2024-11-15T11:05:04.778Z] Copying: 921/1024 [MB] (23 MBps) [2024-11-15T11:05:05.717Z] Copying: 945/1024 [MB] (23 MBps) [2024-11-15T11:05:06.654Z] Copying: 969/1024 [MB] (24 MBps) [2024-11-15T11:05:07.594Z] Copying: 993/1024 [MB] (24 MBps) [2024-11-15T11:05:07.854Z] Copying: 1018/1024 [MB] (24 MBps) [2024-11-15T11:05:07.854Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-15 11:05:07.754214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.993 [2024-11-15 11:05:07.754284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:20.993 [2024-11-15 11:05:07.754310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:20.993 [2024-11-15 11:05:07.754326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.754352] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:20.994 [2024-11-15 11:05:07.759394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.759443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:20.994 [2024-11-15 11:05:07.759460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:21:20.994 [2024-11-15 11:05:07.759473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.761395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.761450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:20.994 [2024-11-15 11:05:07.761466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms 00:21:20.994 [2024-11-15 11:05:07.761479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.780216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.780286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:20.994 [2024-11-15 11:05:07.780302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.745 ms 00:21:20.994 [2024-11-15 11:05:07.780332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.785316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.785376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:20.994 [2024-11-15 11:05:07.785391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.952 ms 00:21:20.994 [2024-11-15 11:05:07.785420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.822297] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.822491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:20.994 [2024-11-15 11:05:07.822517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.847 ms 00:21:20.994 [2024-11-15 11:05:07.822546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.843507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.843558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:20.994 [2024-11-15 11:05:07.843574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.950 ms 00:21:20.994 [2024-11-15 11:05:07.843586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:20.994 [2024-11-15 11:05:07.843720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:20.994 [2024-11-15 11:05:07.843735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:20.994 [2024-11-15 11:05:07.843761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:20.994 [2024-11-15 11:05:07.843772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.255 [2024-11-15 11:05:07.878353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.255 [2024-11-15 11:05:07.878397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:21.255 [2024-11-15 11:05:07.878412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.618 ms 00:21:21.255 [2024-11-15 11:05:07.878423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.255 [2024-11-15 11:05:07.913088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.255 [2024-11-15 11:05:07.913133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:21.255 [2024-11-15 11:05:07.913181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.679 ms 00:21:21.255 [2024-11-15 11:05:07.913194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.255 [2024-11-15 11:05:07.947130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.255 [2024-11-15 11:05:07.947173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:21.255 [2024-11-15 11:05:07.947189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.948 ms 00:21:21.255 [2024-11-15 11:05:07.947201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.255 [2024-11-15 11:05:07.980851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.255 [2024-11-15 11:05:07.981024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:21.255 [2024-11-15 11:05:07.981047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.623 ms 00:21:21.255 [2024-11-15 11:05:07.981060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.255 [2024-11-15 11:05:07.981107] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:21.255 [2024-11-15 11:05:07.981140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981173] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 
11:05:07.981488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:21.255 [2024-11-15 11:05:07.981816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 
00:21:21.256 [2024-11-15 11:05:07.981840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.981997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 
wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:21.256 [2024-11-15 11:05:07.982441] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:21.256 [2024-11-15 11:05:07.982461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
8e67806c-8a2b-44dc-bcca-3a4948b5bfb5 00:21:21.256 [2024-11-15 11:05:07.982474] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:21.256 [2024-11-15 11:05:07.982492] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:21.256 [2024-11-15 11:05:07.982503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:21.256 [2024-11-15 11:05:07.982516] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:21.256 [2024-11-15 11:05:07.982562] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:21.256 [2024-11-15 11:05:07.982576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:21.256 [2024-11-15 11:05:07.982587] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:21.256 [2024-11-15 11:05:07.982611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:21.256 [2024-11-15 11:05:07.982623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:21.256 [2024-11-15 11:05:07.982635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.256 [2024-11-15 11:05:07.982647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:21.256 [2024-11-15 11:05:07.982661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:21:21.256 [2024-11-15 11:05:07.982672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.256 [2024-11-15 11:05:08.002770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.256 [2024-11-15 11:05:08.002810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:21.256 [2024-11-15 11:05:08.002826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.087 ms 00:21:21.256 [2024-11-15 11:05:08.002837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.256 [2024-11-15 11:05:08.003457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.256 [2024-11-15 11:05:08.003493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:21.256 [2024-11-15 11:05:08.003507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:21:21.256 [2024-11-15 11:05:08.003519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.256 [2024-11-15 11:05:08.057709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.256 [2024-11-15 11:05:08.057895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.256 [2024-11-15 11:05:08.057920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.256 [2024-11-15 11:05:08.057934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.256 [2024-11-15 11:05:08.058002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.256 [2024-11-15 11:05:08.058017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.256 [2024-11-15 11:05:08.058032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.256 [2024-11-15 11:05:08.058045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.256 [2024-11-15 11:05:08.058143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.256 [2024-11-15 11:05:08.058159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.256 
[2024-11-15 11:05:08.058173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.256 [2024-11-15 11:05:08.058186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.256 [2024-11-15 11:05:08.058208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.256 [2024-11-15 11:05:08.058221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:21.256 [2024-11-15 11:05:08.058234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.256 [2024-11-15 11:05:08.058246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.190314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.190399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.515 [2024-11-15 11:05:08.190421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.190435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.294727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.294805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.515 [2024-11-15 11:05:08.294824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.294837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.294981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.295005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:21.515 [2024-11-15 11:05:08.295019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.295031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.295085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.295099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:21.515 [2024-11-15 11:05:08.295112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.295125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.295265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.295291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:21.515 [2024-11-15 11:05:08.295304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.295315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.295363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.295377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:21.515 [2024-11-15 11:05:08.295389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.295402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.515 [2024-11-15 11:05:08.295451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.515 [2024-11-15 11:05:08.295465] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:21.515 [2024-11-15 11:05:08.295487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.515 [2024-11-15 11:05:08.295499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.516 [2024-11-15 11:05:08.295628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:21.516 [2024-11-15 11:05:08.295647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:21.516 [2024-11-15 11:05:08.295661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:21.516 [2024-11-15 11:05:08.295674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.516 [2024-11-15 11:05:08.295914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.484 ms, result 0 00:21:22.895 00:21:22.895 00:21:22.895 11:05:09 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:22.895 [2024-11-15 11:05:09.529966] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:21:22.895 [2024-11-15 11:05:09.530427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77022 ] 00:21:22.895 [2024-11-15 11:05:09.711620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.154 [2024-11-15 11:05:09.847019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.414 [2024-11-15 11:05:10.270442] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.414 [2024-11-15 11:05:10.272510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:23.674 [2024-11-15 11:05:10.440915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.440979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:23.674 [2024-11-15 11:05:10.441003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:23.674 [2024-11-15 11:05:10.441014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.441064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.441077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:23.674 [2024-11-15 11:05:10.441091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:23.674 [2024-11-15 11:05:10.441102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.441123] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:23.674 [2024-11-15 11:05:10.442094] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:23.674 [2024-11-15 11:05:10.442118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.442129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:23.674 [2024-11-15 11:05:10.442140] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:21:23.674 [2024-11-15 11:05:10.442151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.443636] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:23.674 [2024-11-15 11:05:10.462556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.462723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:23.674 [2024-11-15 11:05:10.462747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.952 ms 00:21:23.674 [2024-11-15 11:05:10.462759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.462828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.462841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:23.674 [2024-11-15 11:05:10.462853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:23.674 [2024-11-15 11:05:10.462863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.469706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.469851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:23.674 [2024-11-15 11:05:10.469873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.776 ms 00:21:23.674 [2024-11-15 11:05:10.469885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.469979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.469993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:23.674 [2024-11-15 11:05:10.470004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:21:23.674 [2024-11-15 11:05:10.470015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.470059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.470072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:23.674 [2024-11-15 11:05:10.470083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:23.674 [2024-11-15 11:05:10.470094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.470121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:23.674 [2024-11-15 11:05:10.474971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.475005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:23.674 [2024-11-15 11:05:10.475018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.864 ms 00:21:23.674 [2024-11-15 11:05:10.475032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.475065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.475076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:23.674 [2024-11-15 11:05:10.475087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:23.674 [2024-11-15 11:05:10.475097] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.475151] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:23.674 [2024-11-15 11:05:10.475175] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:23.674 [2024-11-15 11:05:10.475211] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:23.674 [2024-11-15 11:05:10.475233] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:23.674 [2024-11-15 11:05:10.475323] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:23.674 [2024-11-15 11:05:10.475337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:23.674 [2024-11-15 11:05:10.475350] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:23.674 [2024-11-15 11:05:10.475363] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:23.674 [2024-11-15 11:05:10.475375] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:23.674 [2024-11-15 11:05:10.475386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:23.674 [2024-11-15 11:05:10.475396] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:23.674 [2024-11-15 11:05:10.475406] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:23.674 [2024-11-15 11:05:10.475416] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:23.674 [2024-11-15 11:05:10.475431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.475441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:23.674 [2024-11-15 11:05:10.475452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:21:23.674 [2024-11-15 11:05:10.475462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.475553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.674 [2024-11-15 11:05:10.475565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:23.674 [2024-11-15 11:05:10.475576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:21:23.674 [2024-11-15 11:05:10.475586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.674 [2024-11-15 11:05:10.475682] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:23.674 [2024-11-15 11:05:10.475701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:23.674 [2024-11-15 11:05:10.475712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.674 [2024-11-15 11:05:10.475722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.674 [2024-11-15 11:05:10.475733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:23.675 [2024-11-15 11:05:10.475742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:23.675 
[2024-11-15 11:05:10.475762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:23.675 [2024-11-15 11:05:10.475772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.675 [2024-11-15 11:05:10.475794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:23.675 [2024-11-15 11:05:10.475803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:23.675 [2024-11-15 11:05:10.475812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:23.675 [2024-11-15 11:05:10.475821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:23.675 [2024-11-15 11:05:10.475831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:23.675 [2024-11-15 11:05:10.475849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:23.675 [2024-11-15 11:05:10.475868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:23.675 [2024-11-15 11:05:10.475877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:23.675 [2024-11-15 11:05:10.475896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.675 [2024-11-15 11:05:10.475914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:23.675 [2024-11-15 11:05:10.475923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.675 [2024-11-15 11:05:10.475942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:23.675 [2024-11-15 11:05:10.475951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.675 [2024-11-15 11:05:10.475970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:23.675 [2024-11-15 11:05:10.475979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:23.675 [2024-11-15 11:05:10.475988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:23.675 [2024-11-15 11:05:10.475997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:23.675 [2024-11-15 11:05:10.476005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:23.675 [2024-11-15 11:05:10.476014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.675 [2024-11-15 11:05:10.476023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:23.675 [2024-11-15 11:05:10.476032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:23.675 [2024-11-15 11:05:10.476040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:23.675 [2024-11-15 11:05:10.476049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:23.675 [2024-11-15 11:05:10.476058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:23.675 [2024-11-15 11:05:10.476068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.675 [2024-11-15 11:05:10.476076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:23.675 [2024-11-15 11:05:10.476085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:23.675 [2024-11-15 11:05:10.476095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.675 [2024-11-15 11:05:10.476104] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:23.675 [2024-11-15 11:05:10.476114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:23.675 [2024-11-15 11:05:10.476123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:23.675 [2024-11-15 11:05:10.476133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:23.675 [2024-11-15 11:05:10.476143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:23.675 [2024-11-15 11:05:10.476152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:23.675 [2024-11-15 11:05:10.476161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:23.675 [2024-11-15 11:05:10.476170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:23.675 [2024-11-15 11:05:10.476180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:23.675 [2024-11-15 11:05:10.476189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:23.675 [2024-11-15 11:05:10.476199] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:23.675 [2024-11-15 11:05:10.476211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:23.675 [2024-11-15 11:05:10.476233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:23.675 [2024-11-15 11:05:10.476242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:23.675 [2024-11-15 11:05:10.476252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:23.675 [2024-11-15 11:05:10.476262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:23.675 [2024-11-15 11:05:10.476272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:23.675 [2024-11-15 11:05:10.476282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:23.675 [2024-11-15 11:05:10.476292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:23.675 [2024-11-15 11:05:10.476303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:23.675 [2024-11-15 11:05:10.476312] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:23.675 [2024-11-15 11:05:10.476363] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:23.675 [2024-11-15 11:05:10.476378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:23.675 [2024-11-15 11:05:10.476399] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:23.675 [2024-11-15 11:05:10.476409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:23.675 [2024-11-15 11:05:10.476420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:23.675 [2024-11-15 11:05:10.476431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.675 [2024-11-15 11:05:10.476442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:23.675 [2024-11-15 11:05:10.476451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.805 ms 00:21:23.675 [2024-11-15 11:05:10.476461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.675 [2024-11-15 11:05:10.517208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.675 [2024-11-15 11:05:10.517248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:23.675 [2024-11-15 11:05:10.517263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.766 ms 00:21:23.675 [2024-11-15 11:05:10.517273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.675 [2024-11-15 11:05:10.517360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.675 [2024-11-15 11:05:10.517371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:23.675 [2024-11-15 11:05:10.517382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:23.675 [2024-11-15 11:05:10.517392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.579065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.579108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:23.934 [2024-11-15 11:05:10.579124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.711 ms 
00:21:23.934 [2024-11-15 11:05:10.579135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.579179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.579191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:23.934 [2024-11-15 11:05:10.579202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:23.934 [2024-11-15 11:05:10.579216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.579730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.579745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:23.934 [2024-11-15 11:05:10.579757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:21:23.934 [2024-11-15 11:05:10.579767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.579888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.579901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:23.934 [2024-11-15 11:05:10.579912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:21:23.934 [2024-11-15 11:05:10.579928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.599725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.599762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:23.934 [2024-11-15 11:05:10.599780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.808 ms 00:21:23.934 [2024-11-15 11:05:10.599791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.619287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:23.934 [2024-11-15 11:05:10.619472] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:23.934 [2024-11-15 11:05:10.619493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.619504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:23.934 [2024-11-15 11:05:10.619516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.619 ms 00:21:23.934 [2024-11-15 11:05:10.619539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.649226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.649274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:23.934 [2024-11-15 11:05:10.649289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.689 ms 00:21:23.934 [2024-11-15 11:05:10.649317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.667203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.667242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:23.934 [2024-11-15 11:05:10.667256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:21:23.934 [2024-11-15 11:05:10.667266] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.685788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.685824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:23.934 [2024-11-15 11:05:10.685838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.510 ms 00:21:23.934 [2024-11-15 11:05:10.685848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.686641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.686666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:23.934 [2024-11-15 11:05:10.686678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:21:23.934 [2024-11-15 11:05:10.686691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.771819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.771879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:23.934 [2024-11-15 11:05:10.771903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.241 ms 00:21:23.934 [2024-11-15 11:05:10.771914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.783514] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:23.934 [2024-11-15 11:05:10.786604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.786760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:23.934 [2024-11-15 11:05:10.786782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.650 ms 00:21:23.934 [2024-11-15 11:05:10.786793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.786901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.786915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:23.934 [2024-11-15 11:05:10.786926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:23.934 [2024-11-15 11:05:10.786941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.787052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.787069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:23.934 [2024-11-15 11:05:10.787081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:21:23.934 [2024-11-15 11:05:10.787090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.787118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.787130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:23.934 [2024-11-15 11:05:10.787141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:23.934 [2024-11-15 11:05:10.787150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:23.934 [2024-11-15 11:05:10.787187] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:23.934 [2024-11-15 11:05:10.787201] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:23.934 [2024-11-15 11:05:10.787212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:23.934 [2024-11-15 11:05:10.787223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:23.934 [2024-11-15 11:05:10.787233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.192 [2024-11-15 11:05:10.824317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.192 [2024-11-15 11:05:10.824361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:24.192 [2024-11-15 11:05:10.824377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.123 ms 00:21:24.192 [2024-11-15 11:05:10.824395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.192 [2024-11-15 11:05:10.824477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:24.192 [2024-11-15 11:05:10.824491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:24.192 [2024-11-15 11:05:10.824502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:24.192 [2024-11-15 11:05:10.824512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:24.192 [2024-11-15 11:05:10.825623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.873 ms, result 0 00:21:25.570  [2024-11-15T11:05:50.542Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-15 11:05:50.537005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.681 [2024-11-15 11:05:50.537094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:03.681 [2024-11-15 11:05:50.537116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:03.681 [2024-11-15 11:05:50.537128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.681 [2024-11-15 11:05:50.537156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.940 [2024-11-15 11:05:50.542248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.940 [2024-11-15 11:05:50.542421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:03.940 [2024-11-15 11:05:50.542456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.077 ms 00:22:03.940 [2024-11-15 11:05:50.542469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.542718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.542733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:03.941 [2024-11-15 11:05:50.542746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:22:03.941 [2024-11-15 11:05:50.542757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.545793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.545961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:03.941 [2024-11-15 11:05:50.545984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:22:03.941 [2024-11-15 11:05:50.545998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.551587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.551631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:03.941 [2024-11-15 11:05:50.551645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.554 ms 00:22:03.941 [2024-11-15 11:05:50.551656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.595146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.595192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:03.941 [2024-11-15 11:05:50.595208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.477 ms 00:22:03.941 [2024-11-15 11:05:50.595219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.616603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.616644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:03.941 [2024-11-15 11:05:50.616659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.372 ms 00:22:03.941 [2024-11-15 11:05:50.616671]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.616819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.616841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:03.941 [2024-11-15 11:05:50.616853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:22:03.941 [2024-11-15 11:05:50.616864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.653473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.653537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:03.941 [2024-11-15 11:05:50.653561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.650 ms 00:22:03.941 [2024-11-15 11:05:50.653572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.689312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.689366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:03.941 [2024-11-15 11:05:50.689380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.757 ms 00:22:03.941 [2024-11-15 11:05:50.689392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.725678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.725717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:03.941 [2024-11-15 11:05:50.725731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.306 ms 00:22:03.941 [2024-11-15 11:05:50.725742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.761881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.941 [2024-11-15 11:05:50.761923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.941 [2024-11-15 11:05:50.761937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.115 ms 00:22:03.941 [2024-11-15 11:05:50.761948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.941 [2024-11-15 11:05:50.761989] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.941 [2024-11-15 11:05:50.762008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 
261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:03.941 [2024-11-15 11:05:50.762376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762670] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 
11:05:50.762946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.762991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:03.942 [2024-11-15 11:05:50.763156] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:03.942 [2024-11-15 11:05:50.763172] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e67806c-8a2b-44dc-bcca-3a4948b5bfb5 00:22:03.943 [2024-11-15 11:05:50.763184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:03.943 [2024-11-15 11:05:50.763195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:03.943 [2024-11-15 11:05:50.763205] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:03.943 [2024-11-15 11:05:50.763217] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:03.943 [2024-11-15 11:05:50.763227] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:03.943 [2024-11-15 11:05:50.763238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:03.943 [2024-11-15 11:05:50.763261] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:03.943 [2024-11-15 11:05:50.763271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:03.943 [2024-11-15 11:05:50.763280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:03.943 [2024-11-15 11:05:50.763291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.943 [2024-11-15 11:05:50.763302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:03.943 [2024-11-15 11:05:50.763314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:22:03.943 [2024-11-15 11:05:50.763324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.943 [2024-11-15 11:05:50.784588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.943 [2024-11-15 11:05:50.784624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:03.943 [2024-11-15 11:05:50.784638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.257 ms 00:22:03.943 [2024-11-15 11:05:50.784650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.943 [2024-11-15 11:05:50.785263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.943 [2024-11-15 11:05:50.785281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:03.943 [2024-11-15 11:05:50.785293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:22:03.943 [2024-11-15 11:05:50.785310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.202 [2024-11-15 11:05:50.839818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.202 [2024-11-15 11:05:50.840022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:04.202 [2024-11-15 11:05:50.840045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.202 [2024-11-15 11:05:50.840058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.202 [2024-11-15 11:05:50.840129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.202 [2024-11-15 11:05:50.840141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:04.202 [2024-11-15 11:05:50.840153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.202 [2024-11-15 11:05:50.840171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.202 [2024-11-15 11:05:50.840253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.202 [2024-11-15 11:05:50.840267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:04.202 [2024-11-15 11:05:50.840278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.202 [2024-11-15 11:05:50.840290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.202 [2024-11-15 11:05:50.840309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.202 [2024-11-15 11:05:50.840320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:04.202 [2024-11-15 11:05:50.840331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.202 [2024-11-15 11:05:50.840342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.202 [2024-11-15 11:05:50.976195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:04.202 [2024-11-15 11:05:50.976283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:04.202 [2024-11-15 11:05:50.976302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.202 [2024-11-15 11:05:50.976315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.083094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.083351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:04.462 [2024-11-15 11:05:51.083379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.083391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.083559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.083575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:04.462 [2024-11-15 11:05:51.083588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.083599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.083656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.083670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:04.462 [2024-11-15 11:05:51.083682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.083693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.083822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.083837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:04.462 [2024-11-15 11:05:51.083849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.083861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.083902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.083915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:04.462 [2024-11-15 11:05:51.083927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.083938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.083984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.084002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:04.462 [2024-11-15 11:05:51.084013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.084024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.084074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:04.462 [2024-11-15 11:05:51.084087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:04.462 [2024-11-15 11:05:51.084099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:04.462 [2024-11-15 11:05:51.084110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:04.462 [2024-11-15 11:05:51.084256] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.100 ms, result 0 00:22:05.399 00:22:05.399 00:22:05.399 11:05:52 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:07.301 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:07.301 11:05:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:07.301 [2024-11-15 11:05:54.090031] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:22:07.301 [2024-11-15 11:05:54.090158] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77477 ] 00:22:07.559 [2024-11-15 11:05:54.267405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.559 [2024-11-15 11:05:54.380799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.126 [2024-11-15 11:05:54.760290] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.126 [2024-11-15 11:05:54.760364] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:08.126 [2024-11-15 11:05:54.921464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.126 [2024-11-15 11:05:54.921521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:08.126 [2024-11-15 11:05:54.921563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.126 [2024-11-15 11:05:54.921573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.126 [2024-11-15 11:05:54.921635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.126 [2024-11-15 11:05:54.921648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:08.126 [2024-11-15 11:05:54.921662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:08.126 [2024-11-15 11:05:54.921671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.126 [2024-11-15 11:05:54.921693] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:08.126 [2024-11-15 11:05:54.922601] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:08.126 [2024-11-15 11:05:54.922630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.126 [2024-11-15 11:05:54.922642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:08.126 [2024-11-15 11:05:54.922653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:22:08.126 [2024-11-15 11:05:54.922663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.126 [2024-11-15 11:05:54.924124] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:08.126 [2024-11-15 11:05:54.943184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.126 [2024-11-15 11:05:54.943223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:08.127 [2024-11-15 11:05:54.943237] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.090 ms 00:22:08.127 [2024-11-15 11:05:54.943247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.943311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.943325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:08.127 [2024-11-15 11:05:54.943336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:08.127 [2024-11-15 11:05:54.943345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.950140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.950295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:08.127 [2024-11-15 11:05:54.950315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.736 ms 00:22:08.127 [2024-11-15 11:05:54.950327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.950414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.950426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:08.127 [2024-11-15 11:05:54.950437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:08.127 [2024-11-15 11:05:54.950446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.950488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.950499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:08.127 [2024-11-15 11:05:54.950510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:08.127 [2024-11-15 11:05:54.950519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.950561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.127 [2024-11-15 11:05:54.955379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.955411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:08.127 [2024-11-15 11:05:54.955424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.831 ms 00:22:08.127 [2024-11-15 11:05:54.955438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.955467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.955478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:08.127 [2024-11-15 11:05:54.955489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:08.127 [2024-11-15 11:05:54.955499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.955567] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:08.127 [2024-11-15 11:05:54.955593] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:08.127 [2024-11-15 11:05:54.955629] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:08.127 [2024-11-15 11:05:54.955650] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:08.127 [2024-11-15 11:05:54.955739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:08.127 [2024-11-15 11:05:54.955752] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:08.127 [2024-11-15 11:05:54.955765] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:08.127 [2024-11-15 11:05:54.955778] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:08.127 [2024-11-15 11:05:54.955790] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:08.127 [2024-11-15 11:05:54.955802] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:08.127 [2024-11-15 11:05:54.955812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:08.127 [2024-11-15 11:05:54.955821] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:08.127 [2024-11-15 11:05:54.955832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:08.127 [2024-11-15 11:05:54.955846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.955857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:08.127 [2024-11-15 11:05:54.955867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:22:08.127 [2024-11-15 11:05:54.955877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.955951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.127 [2024-11-15 11:05:54.955973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:08.127 [2024-11-15 11:05:54.955984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:08.127 [2024-11-15 11:05:54.955993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.127 [2024-11-15 11:05:54.956086] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:08.127 [2024-11-15 11:05:54.956103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:08.127 [2024-11-15 11:05:54.956114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:08.127 [2024-11-15 11:05:54.956144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:08.127 [2024-11-15 11:05:54.956174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.127 [2024-11-15 11:05:54.956193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:08.127 [2024-11-15 11:05:54.956202] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:08.127 [2024-11-15 
11:05:54.956211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:08.127 [2024-11-15 11:05:54.956221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:08.127 [2024-11-15 11:05:54.956231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:08.127 [2024-11-15 11:05:54.956250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:08.127 [2024-11-15 11:05:54.956269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:08.127 [2024-11-15 11:05:54.956298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:08.127 [2024-11-15 11:05:54.956325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:08.127 [2024-11-15 11:05:54.956353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:08.127 [2024-11-15 11:05:54.956380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:08.127 [2024-11-15 11:05:54.956406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.127 [2024-11-15 11:05:54.956425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:08.127 [2024-11-15 11:05:54.956434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:08.127 [2024-11-15 11:05:54.956442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:08.127 [2024-11-15 11:05:54.956451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:08.127 [2024-11-15 11:05:54.956460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:08.127 [2024-11-15 11:05:54.956469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:08.127 [2024-11-15 11:05:54.956487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:08.127 [2024-11-15 11:05:54.956497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956506] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] 
Base device layout: 00:22:08.127 [2024-11-15 11:05:54.956516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:08.127 [2024-11-15 11:05:54.956538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:08.127 [2024-11-15 11:05:54.956560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:08.127 [2024-11-15 11:05:54.956570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:08.127 [2024-11-15 11:05:54.956579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:08.127 [2024-11-15 11:05:54.956588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:08.127 [2024-11-15 11:05:54.956598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:08.127 [2024-11-15 11:05:54.956607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:08.127 [2024-11-15 11:05:54.956617] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:08.127 [2024-11-15 11:05:54.956629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.127 [2024-11-15 11:05:54.956641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:08.127 [2024-11-15 11:05:54.956651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:08.128 [2024-11-15 11:05:54.956662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:08.128 [2024-11-15 11:05:54.956672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:08.128 [2024-11-15 11:05:54.956682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:08.128 [2024-11-15 11:05:54.956703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:08.128 [2024-11-15 11:05:54.956714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:08.128 [2024-11-15 11:05:54.956724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:08.128 [2024-11-15 11:05:54.956734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:08.128 [2024-11-15 11:05:54.956744] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:08.128 [2024-11-15 11:05:54.956753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:08.128 [2024-11-15 11:05:54.956763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:08.128 [2024-11-15 11:05:54.956773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:08.128 [2024-11-15 11:05:54.956783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:08.128 [2024-11-15 11:05:54.956792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:08.128 [2024-11-15 11:05:54.956807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:08.128 [2024-11-15 11:05:54.956818] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:08.128 [2024-11-15 11:05:54.956828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:08.128 [2024-11-15 11:05:54.956838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:08.128 [2024-11-15 11:05:54.956849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:08.128 [2024-11-15 11:05:54.956859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.128 [2024-11-15 11:05:54.956869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:08.128 [2024-11-15 11:05:54.956879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:22:08.128 [2024-11-15 11:05:54.956889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.386 [2024-11-15 11:05:54.996961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.386 [2024-11-15 11:05:54.997145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:08.386 [2024-11-15 11:05:54.997231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.092 ms 00:22:08.386 [2024-11-15 11:05:54.997270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.386 [2024-11-15 11:05:54.997382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.386 [2024-11-15 11:05:54.997414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:08.386 [2024-11-15 11:05:54.997445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:08.386 [2024-11-15 11:05:54.997557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.386 [2024-11-15 11:05:55.050242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.386 [2024-11-15 11:05:55.050397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:08.386 [2024-11-15 11:05:55.050480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.671 ms 00:22:08.386 [2024-11-15 11:05:55.050517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.386 [2024-11-15 11:05:55.050593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.386 [2024-11-15 11:05:55.050626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:08.386 [2024-11-15 11:05:55.050657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:08.386 [2024-11-15 11:05:55.050751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.386 
[2024-11-15 11:05:55.051282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.386 [2024-11-15 11:05:55.051407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:08.386 [2024-11-15 11:05:55.051478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:22:08.387 [2024-11-15 11:05:55.051514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.051751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.051788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:08.387 [2024-11-15 11:05:55.051819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:08.387 [2024-11-15 11:05:55.051857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.071292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.071426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:08.387 [2024-11-15 11:05:55.071505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.370 ms 00:22:08.387 [2024-11-15 11:05:55.071563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.089925] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:08.387 [2024-11-15 11:05:55.090079] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:08.387 [2024-11-15 11:05:55.090204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.090238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:08.387 [2024-11-15 11:05:55.090270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.539 ms 00:22:08.387 [2024-11-15 11:05:55.090299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.120236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.120400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:08.387 [2024-11-15 11:05:55.120542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.929 ms 00:22:08.387 [2024-11-15 11:05:55.120583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.139414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.139567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:08.387 [2024-11-15 11:05:55.139652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.781 ms 00:22:08.387 [2024-11-15 11:05:55.139667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.157935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.157974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:08.387 [2024-11-15 11:05:55.157987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.259 ms 00:22:08.387 [2024-11-15 11:05:55.157997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.158826] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:22:08.387 [2024-11-15 11:05:55.158857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:08.387 [2024-11-15 11:05:55.158869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:22:08.387 [2024-11-15 11:05:55.158883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.387 [2024-11-15 11:05:55.245299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.245510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:08.645 [2024-11-15 11:05:55.245561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.533 ms 00:22:08.645 [2024-11-15 11:05:55.245573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.256701] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:08.645 [2024-11-15 11:05:55.259756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.259893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:08.645 [2024-11-15 11:05:55.259915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.053 ms 00:22:08.645 [2024-11-15 11:05:55.259926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.260032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.260046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:08.645 [2024-11-15 11:05:55.260057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:08.645 [2024-11-15 11:05:55.260071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.260165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.260177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:08.645 [2024-11-15 11:05:55.260188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:08.645 [2024-11-15 11:05:55.260198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.260222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.260234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:08.645 [2024-11-15 11:05:55.260244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:08.645 [2024-11-15 11:05:55.260254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.260286] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:08.645 [2024-11-15 11:05:55.260300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.260310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:08.645 [2024-11-15 11:05:55.260322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:22:08.645 [2024-11-15 11:05:55.260331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.296283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.296326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL dirty state 00:22:08.645 [2024-11-15 11:05:55.296341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.987 ms 00:22:08.645 [2024-11-15 11:05:55.296358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.296435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:08.645 [2024-11-15 11:05:55.296447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:08.645 [2024-11-15 11:05:55.296459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:08.645 [2024-11-15 11:05:55.296469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:08.645 [2024-11-15 11:05:55.297669] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.308 ms, result 0 00:22:09.580  [2024-11-15T11:05:57.375Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-15T11:05:58.403Z] Copying: 47/1024 [MB] (23 MBps) [2024-11-15T11:05:59.339Z] Copying: 72/1024 [MB] (24 MBps) [2024-11-15T11:06:00.716Z] Copying: 98/1024 [MB] (26 MBps) [2024-11-15T11:06:01.653Z] Copying: 123/1024 [MB] (24 MBps) [2024-11-15T11:06:02.590Z] Copying: 147/1024 [MB] (23 MBps) [2024-11-15T11:06:03.527Z] Copying: 169/1024 [MB] (22 MBps) [2024-11-15T11:06:04.464Z] Copying: 192/1024 [MB] (22 MBps) [2024-11-15T11:06:05.401Z] Copying: 217/1024 [MB] (24 MBps) [2024-11-15T11:06:06.339Z] Copying: 242/1024 [MB] (25 MBps) [2024-11-15T11:06:07.722Z] Copying: 266/1024 [MB] (23 MBps) [2024-11-15T11:06:08.656Z] Copying: 288/1024 [MB] (22 MBps) [2024-11-15T11:06:09.591Z] Copying: 311/1024 [MB] (22 MBps) [2024-11-15T11:06:10.529Z] Copying: 334/1024 [MB] (22 MBps) [2024-11-15T11:06:11.467Z] Copying: 356/1024 [MB] (22 MBps) [2024-11-15T11:06:12.404Z] Copying: 378/1024 [MB] (22 MBps) [2024-11-15T11:06:13.351Z] Copying: 401/1024 [MB] (22 MBps) [2024-11-15T11:06:14.308Z] Copying: 424/1024 [MB] (23 MBps) [2024-11-15T11:06:15.684Z] Copying: 448/1024 [MB] (23 MBps) [2024-11-15T11:06:16.621Z] Copying: 472/1024 [MB] (23 MBps) [2024-11-15T11:06:17.558Z] Copying: 495/1024 [MB] (22 MBps) [2024-11-15T11:06:18.494Z] Copying: 518/1024 [MB] (22 MBps) [2024-11-15T11:06:19.431Z] Copying: 542/1024 [MB] (23 MBps) [2024-11-15T11:06:20.368Z] Copying: 564/1024 [MB] (22 MBps) [2024-11-15T11:06:21.306Z] Copying: 587/1024 [MB] (22 MBps) [2024-11-15T11:06:22.689Z] Copying: 609/1024 [MB] (22 MBps) [2024-11-15T11:06:23.631Z] Copying: 632/1024 [MB] (22 MBps) [2024-11-15T11:06:24.565Z] Copying: 654/1024 [MB] (22 MBps) [2024-11-15T11:06:25.502Z] Copying: 678/1024 [MB] (23 MBps) [2024-11-15T11:06:26.439Z] Copying: 701/1024 [MB] (23 MBps) [2024-11-15T11:06:27.374Z] Copying: 725/1024 [MB] (23 MBps) [2024-11-15T11:06:28.308Z] Copying: 748/1024 [MB] (22 MBps) [2024-11-15T11:06:29.687Z] Copying: 771/1024 [MB] (23 MBps) [2024-11-15T11:06:30.256Z] Copying: 794/1024 [MB] (23 MBps) [2024-11-15T11:06:31.656Z] Copying: 818/1024 [MB] (24 MBps) [2024-11-15T11:06:32.592Z] Copying: 842/1024 [MB] (24 MBps) [2024-11-15T11:06:33.527Z] Copying: 866/1024 [MB] (23 MBps) [2024-11-15T11:06:34.463Z] Copying: 889/1024 [MB] (23 MBps) [2024-11-15T11:06:35.401Z] Copying: 912/1024 [MB] (22 MBps) [2024-11-15T11:06:36.337Z] Copying: 935/1024 [MB] (22 MBps) [2024-11-15T11:06:37.274Z] Copying: 957/1024 [MB] (22 MBps) [2024-11-15T11:06:38.660Z] Copying: 979/1024 [MB] (22 MBps) [2024-11-15T11:06:39.596Z] Copying: 1002/1024 [MB] (22 MBps) [2024-11-15T11:06:39.856Z] Copying: 1023/1024 [MB] (20 MBps) 
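The "Copying" records surrounding this point are spdk_dd progress updates; the run wraps up just below with an average of 23 MBps over the full 1024 MB. A quick cross-check of that average, using only timestamps and sizes visible in the records themselves; the helper name is made up for illustration and this is not SPDK code:

    #include <stdio.h>

    /* 1024 MB copied between ~11:05:55.3 (the "FTL startup" finish above)
     * and ~11:06:39.9 (the last progress record) is ~44.6 s of wall time. */
    static double avg_mbps(double copied_mb, double elapsed_s)
    {
            return copied_mb / elapsed_s;
    }

    int main(void)
    {
            printf("average: %.1f MBps\n", avg_mbps(1024.0, 44.6)); /* ~23.0 */
            return 0;
    }
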
[2024-11-15T11:06:39.856Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-15 11:06:39.826263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.995 [2024-11-15 11:06:39.826342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:52.995 [2024-11-15 11:06:39.826360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:52.995 [2024-11-15 11:06:39.826378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.995 [2024-11-15 11:06:39.827895] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:52.995 [2024-11-15 11:06:39.833919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.995 [2024-11-15 11:06:39.833961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:52.995 [2024-11-15 11:06:39.833975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.994 ms 00:22:52.995 [2024-11-15 11:06:39.833985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.995 [2024-11-15 11:06:39.845499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.995 [2024-11-15 11:06:39.845572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:52.995 [2024-11-15 11:06:39.845586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.078 ms 00:22:52.995 [2024-11-15 11:06:39.845597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.254 [2024-11-15 11:06:39.869106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.254 [2024-11-15 11:06:39.869149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.254 [2024-11-15 11:06:39.869163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.522 ms 00:22:53.254 [2024-11-15 11:06:39.869175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.254 [2024-11-15 11:06:39.874205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.254 [2024-11-15 11:06:39.874241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:53.254 [2024-11-15 11:06:39.874253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:22:53.254 [2024-11-15 11:06:39.874263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.254 [2024-11-15 11:06:39.911942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.254 [2024-11-15 11:06:39.911983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:53.254 [2024-11-15 11:06:39.911997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.693 ms 00:22:53.254 [2024-11-15 11:06:39.912008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.254 [2024-11-15 11:06:39.933628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.254 [2024-11-15 11:06:39.933673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:53.254 [2024-11-15 11:06:39.933686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.617 ms 00:22:53.254 [2024-11-15 11:06:39.933697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.254 [2024-11-15 11:06:40.057572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.254 [2024-11-15 11:06:40.057628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Persist P2L metadata 00:22:53.254 [2024-11-15 11:06:40.057643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 124.032 ms 00:22:53.254 [2024-11-15 11:06:40.057654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.254 [2024-11-15 11:06:40.096341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.254 [2024-11-15 11:06:40.096544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:53.254 [2024-11-15 11:06:40.096567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.728 ms 00:22:53.254 [2024-11-15 11:06:40.096577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.514 [2024-11-15 11:06:40.132606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.514 [2024-11-15 11:06:40.132660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:53.514 [2024-11-15 11:06:40.132673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.008 ms 00:22:53.514 [2024-11-15 11:06:40.132684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.514 [2024-11-15 11:06:40.168057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.514 [2024-11-15 11:06:40.168101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:53.514 [2024-11-15 11:06:40.168115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.390 ms 00:22:53.514 [2024-11-15 11:06:40.168125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.514 [2024-11-15 11:06:40.203050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.514 [2024-11-15 11:06:40.203092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:53.514 [2024-11-15 11:06:40.203105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.894 ms 00:22:53.514 [2024-11-15 11:06:40.203115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.514 [2024-11-15 11:06:40.203154] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:53.514 [2024-11-15 11:06:40.203171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 104448 / 261120 wr_cnt: 1 state: open 00:22:53.514 [2024-11-15 11:06:40.203185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:53.514 [2024-11-15 11:06:40.203196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:53.514 [2024-11-15 11:06:40.203207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203275] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203560] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 
11:06:40.203828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.203996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:22:53.515 [2024-11-15 11:06:40.204087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:53.515 [2024-11-15 11:06:40.204157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:53.516 [2024-11-15 11:06:40.204260] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:53.516 [2024-11-15 11:06:40.204270] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e67806c-8a2b-44dc-bcca-3a4948b5bfb5 00:22:53.516 [2024-11-15 11:06:40.204281] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 104448 00:22:53.516 [2024-11-15 11:06:40.204292] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 105408 00:22:53.516 [2024-11-15 11:06:40.204301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 104448 00:22:53.516 [2024-11-15 11:06:40.204311] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0092 00:22:53.516 [2024-11-15 11:06:40.204321] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:53.516 [2024-11-15 11:06:40.204337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:53.516 [2024-11-15 11:06:40.204358] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:53.516 [2024-11-15 11:06:40.204368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:53.516 [2024-11-15 11:06:40.204377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] start: 0 00:22:53.516 [2024-11-15 11:06:40.204387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.516 [2024-11-15 11:06:40.204396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:53.516 [2024-11-15 11:06:40.204406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.236 ms 00:22:53.516 [2024-11-15 11:06:40.204416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.516 [2024-11-15 11:06:40.224294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.516 [2024-11-15 11:06:40.224338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:53.516 [2024-11-15 11:06:40.224350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.850 ms 00:22:53.516 [2024-11-15 11:06:40.224365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.516 [2024-11-15 11:06:40.224940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.516 [2024-11-15 11:06:40.224965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:53.516 [2024-11-15 11:06:40.224976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:22:53.516 [2024-11-15 11:06:40.224986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.516 [2024-11-15 11:06:40.277234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.516 [2024-11-15 11:06:40.277273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:53.516 [2024-11-15 11:06:40.277291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.516 [2024-11-15 11:06:40.277302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.516 [2024-11-15 11:06:40.277357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.516 [2024-11-15 11:06:40.277368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:53.516 [2024-11-15 11:06:40.277378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.516 [2024-11-15 11:06:40.277388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.516 [2024-11-15 11:06:40.277474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.516 [2024-11-15 11:06:40.277489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:53.516 [2024-11-15 11:06:40.277501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.516 [2024-11-15 11:06:40.277516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.516 [2024-11-15 11:06:40.277562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.516 [2024-11-15 11:06:40.277574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:53.516 [2024-11-15 11:06:40.277584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.516 [2024-11-15 11:06:40.277594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.403439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.403682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:53.776 [2024-11-15 11:06:40.403715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 
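The statistics block dumped a few records above is worth a second look: WAF (write amplification factor) is simply total media writes divided by user writes, and the 960-block difference between the two is presumably the FTL's own metadata traffic on top of the user I/O. A minimal check of the reported 1.0092, with both inputs transcribed from the log; illustrative arithmetic, not the actual ftl_debug.c code:

    #include <stdio.h>

    int main(void)
    {
            double total_writes = 105408.0; /* "total writes" above */
            double user_writes  = 104448.0; /* "user writes" above  */
            /* 105408 / 104448 = 1.0092, matching the dumped WAF value */
            printf("WAF: %.4f\n", total_writes / user_writes);
            return 0;
    }
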
[2024-11-15 11:06:40.403726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:53.776 [2024-11-15 11:06:40.503330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:53.776 [2024-11-15 11:06:40.503458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:53.776 [2024-11-15 11:06:40.503544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:53.776 [2024-11-15 11:06:40.503683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:53.776 [2024-11-15 11:06:40.503753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:53.776 [2024-11-15 11:06:40.503833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.503888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:53.776 [2024-11-15 11:06:40.503901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:53.776 [2024-11-15 11:06:40.503911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:53.776 [2024-11-15 11:06:40.503920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.776 [2024-11-15 11:06:40.504079] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 680.608 ms, result 0 00:22:55.155 00:22:55.155 00:22:55.414 11:06:42 ftl.ftl_restore -- ftl/restore.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:55.414 [2024-11-15 11:06:42.129458] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:22:55.414 [2024-11-15 11:06:42.129636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77967 ] 00:22:55.673 [2024-11-15 11:06:42.313958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.673 [2024-11-15 11:06:42.432139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.241 [2024-11-15 11:06:42.809256] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:56.241 [2024-11-15 11:06:42.809331] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:56.241 [2024-11-15 11:06:42.970245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.241 [2024-11-15 11:06:42.970303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:56.241 [2024-11-15 11:06:42.970325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:56.241 [2024-11-15 11:06:42.970336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.241 [2024-11-15 11:06:42.970384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.241 [2024-11-15 11:06:42.970397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:56.241 [2024-11-15 11:06:42.970411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:56.241 [2024-11-15 11:06:42.970421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.241 [2024-11-15 11:06:42.970443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:56.241 [2024-11-15 11:06:42.971398] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:56.241 [2024-11-15 11:06:42.971427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.241 [2024-11-15 11:06:42.971439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:56.241 [2024-11-15 11:06:42.971450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:22:56.241 [2024-11-15 11:06:42.971460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.241 [2024-11-15 11:06:42.972958] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:56.241 [2024-11-15 11:06:42.992470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.241 [2024-11-15 11:06:42.992511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:56.241 [2024-11-15 11:06:42.992546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.545 ms 00:22:56.241 [2024-11-15 11:06:42.992558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.241 [2024-11-15 11:06:42.992640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.241 [2024-11-15 11:06:42.992658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:56.241 
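The spdk_dd invocation above drives the restore through the ftl0 bdev. FTL exposes 4 KiB blocks here (the layout dump below shows the l2p region as 0x5000 = 20480 blocks and 80.00 MiB, i.e. 4096 bytes per block), and assuming --skip and --count are expressed in those blocks (an assumption worth confirming against spdk_dd's help output), the numbers decode to a 1 GiB read starting at a 512 MiB offset, which matches the 1024 MB total reported by the progress records further down. A sketch of that decoding under those assumptions:

    #include <stdio.h>

    int main(void)
    {
            const long long bs = 4096;                      /* assumed block size */
            const long long skip = 131072, count = 262144;  /* from the command   */
            printf("offset: %lld MiB\n", skip * bs >> 20);  /* 512 MiB  */
            printf("length: %lld MiB\n", count * bs >> 20); /* 1024 MiB */
            return 0;
    }
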
[2024-11-15 11:06:42.992671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:56.241 [2024-11-15 11:06:42.992682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:42.999591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:42.999622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:56.242 [2024-11-15 11:06:42.999635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.845 ms 00:22:56.242 [2024-11-15 11:06:42.999646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:42.999727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:42.999741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:56.242 [2024-11-15 11:06:42.999753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:22:56.242 [2024-11-15 11:06:42.999763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:42.999804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:42.999815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:56.242 [2024-11-15 11:06:42.999826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:56.242 [2024-11-15 11:06:42.999836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:42.999861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:56.242 [2024-11-15 11:06:43.004780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:43.004815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:56.242 [2024-11-15 11:06:43.004828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:22:56.242 [2024-11-15 11:06:43.004842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:43.004873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:43.004883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:56.242 [2024-11-15 11:06:43.004895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:56.242 [2024-11-15 11:06:43.004905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:43.004960] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:56.242 [2024-11-15 11:06:43.005004] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:56.242 [2024-11-15 11:06:43.005045] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:56.242 [2024-11-15 11:06:43.005068] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:56.242 [2024-11-15 11:06:43.005159] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:56.242 [2024-11-15 11:06:43.005173] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:56.242 [2024-11-15 11:06:43.005185] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:56.242 [2024-11-15 11:06:43.005199] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005211] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005232] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:56.242 [2024-11-15 11:06:43.005242] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:56.242 [2024-11-15 11:06:43.005253] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:56.242 [2024-11-15 11:06:43.005262] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:56.242 [2024-11-15 11:06:43.005277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:43.005287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:56.242 [2024-11-15 11:06:43.005298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:22:56.242 [2024-11-15 11:06:43.005308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:43.005382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.242 [2024-11-15 11:06:43.005392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:56.242 [2024-11-15 11:06:43.005404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:56.242 [2024-11-15 11:06:43.005414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.242 [2024-11-15 11:06:43.005509] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:56.242 [2024-11-15 11:06:43.005553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:56.242 [2024-11-15 11:06:43.005566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:56.242 [2024-11-15 11:06:43.005596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:56.242 [2024-11-15 11:06:43.005628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.242 [2024-11-15 11:06:43.005648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:56.242 [2024-11-15 11:06:43.005678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:56.242 [2024-11-15 11:06:43.005688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:56.242 [2024-11-15 11:06:43.005697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:56.242 [2024-11-15 11:06:43.005707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:56.242 [2024-11-15 11:06:43.005726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:22:56.242 [2024-11-15 11:06:43.005736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:56.242 [2024-11-15 11:06:43.005746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:56.242 [2024-11-15 11:06:43.005775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:56.242 [2024-11-15 11:06:43.005803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:56.242 [2024-11-15 11:06:43.005836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:56.242 [2024-11-15 11:06:43.005864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:56.242 [2024-11-15 11:06:43.005883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:56.242 [2024-11-15 11:06:43.005892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.242 [2024-11-15 11:06:43.005910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:56.242 [2024-11-15 11:06:43.005920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:56.242 [2024-11-15 11:06:43.005930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:56.242 [2024-11-15 11:06:43.005938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:56.242 [2024-11-15 11:06:43.005948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:56.242 [2024-11-15 11:06:43.005957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:56.242 [2024-11-15 11:06:43.005975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:56.242 [2024-11-15 11:06:43.005984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.242 [2024-11-15 11:06:43.005994] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:56.242 [2024-11-15 11:06:43.006004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:56.242 [2024-11-15 11:06:43.006013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:56.242 [2024-11-15 11:06:43.006023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:56.242 [2024-11-15 11:06:43.006033] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:56.242 [2024-11-15 11:06:43.006042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:56.242 [2024-11-15 11:06:43.006052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:56.242 [2024-11-15 11:06:43.006061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:56.242 [2024-11-15 11:06:43.006070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:56.242 [2024-11-15 11:06:43.006080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:56.242 [2024-11-15 11:06:43.006090] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:56.242 [2024-11-15 11:06:43.006102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.242 [2024-11-15 11:06:43.006113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:56.242 [2024-11-15 11:06:43.006123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:56.242 [2024-11-15 11:06:43.006133] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:56.243 [2024-11-15 11:06:43.006143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:56.243 [2024-11-15 11:06:43.006153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:56.243 [2024-11-15 11:06:43.006163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:56.243 [2024-11-15 11:06:43.006173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:56.243 [2024-11-15 11:06:43.006183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:56.243 [2024-11-15 11:06:43.006193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:56.243 [2024-11-15 11:06:43.006202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:56.243 [2024-11-15 11:06:43.006212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:56.243 [2024-11-15 11:06:43.006224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:56.243 [2024-11-15 11:06:43.006238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:56.243 [2024-11-15 11:06:43.006249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:56.243 [2024-11-15 11:06:43.006259] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:56.243 
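The nvc metadata layout dumped just above is easy to sanity-check: each region should begin exactly where the previous one ends (blk_offs + blk_sz), and the trailing free region should run out to the end of the cache device. With the 4 KiB block size derived above, the layout's end at 0x143300 blocks is exactly the 5171.00 MiB NV cache capacity reported earlier; likewise the 80.00 MiB l2p region is 20971520 L2P entries times 4 bytes each. A small illustrative check, with offsets and sizes transcribed from the log (not SPDK code):

    #include <stdio.h>

    int main(void)
    {
            static const struct { unsigned off, sz; } r[] = {
                    {0x0000, 0x0020}, {0x0020, 0x5000}, {0x5020, 0x0080},
                    {0x50a0, 0x0080}, {0x5120, 0x0800}, {0x5920, 0x0800},
                    {0x6120, 0x0800}, {0x6920, 0x0800}, {0x7120, 0x0040},
                    {0x7160, 0x0040}, {0x71a0, 0x0020}, {0x71c0, 0x0020},
                    {0x71e0, 0x0020}, {0x7200, 0x0020}, {0x7220, 0x13c0e0},
            };
            const unsigned n = sizeof(r) / sizeof(r[0]);
            for (unsigned i = 1; i < n; i++)
                    if (r[i].off != r[i - 1].off + r[i - 1].sz)
                            printf("gap before region at 0x%x\n", r[i].off);
            /* prints 0x143300 = 1323776 blocks * 4 KiB = 5171 MiB */
            printf("layout ends at block 0x%x\n", r[n - 1].off + r[n - 1].sz);
            return 0;
    }
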
[2024-11-15 11:06:43.006273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:56.243 [2024-11-15 11:06:43.006285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:56.243 [2024-11-15 11:06:43.006296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:56.243 [2024-11-15 11:06:43.006307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:56.243 [2024-11-15 11:06:43.006318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:56.243 [2024-11-15 11:06:43.006328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.243 [2024-11-15 11:06:43.006339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:56.243 [2024-11-15 11:06:43.006350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:22:56.243 [2024-11-15 11:06:43.006360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.243 [2024-11-15 11:06:43.048944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.243 [2024-11-15 11:06:43.049129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:56.243 [2024-11-15 11:06:43.049152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.606 ms 00:22:56.243 [2024-11-15 11:06:43.049165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.243 [2024-11-15 11:06:43.049258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.243 [2024-11-15 11:06:43.049270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:56.243 [2024-11-15 11:06:43.049281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:56.243 [2024-11-15 11:06:43.049292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.502 [2024-11-15 11:06:43.132009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.502 [2024-11-15 11:06:43.132051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:56.502 [2024-11-15 11:06:43.132066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.783 ms 00:22:56.502 [2024-11-15 11:06:43.132078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.502 [2024-11-15 11:06:43.132130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.502 [2024-11-15 11:06:43.132143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:56.502 [2024-11-15 11:06:43.132154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:56.502 [2024-11-15 11:06:43.132169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.502 [2024-11-15 11:06:43.132703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.502 [2024-11-15 11:06:43.132719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:56.502 [2024-11-15 11:06:43.132730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:22:56.502 [2024-11-15 11:06:43.132741] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:56.502 [2024-11-15 11:06:43.132883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.502 [2024-11-15 11:06:43.132899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:56.502 [2024-11-15 11:06:43.132910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:22:56.502 [2024-11-15 11:06:43.132927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.502 [2024-11-15 11:06:43.151097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.502 [2024-11-15 11:06:43.151133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:56.502 [2024-11-15 11:06:43.151151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.159 ms 00:22:56.503 [2024-11-15 11:06:43.151161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.170307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:56.503 [2024-11-15 11:06:43.170360] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:56.503 [2024-11-15 11:06:43.170376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.170386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:56.503 [2024-11-15 11:06:43.170398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.138 ms 00:22:56.503 [2024-11-15 11:06:43.170408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.201108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.201154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:56.503 [2024-11-15 11:06:43.201168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.706 ms 00:22:56.503 [2024-11-15 11:06:43.201180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.219502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.219557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:56.503 [2024-11-15 11:06:43.219571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.301 ms 00:22:56.503 [2024-11-15 11:06:43.219581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.237556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.237591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:56.503 [2024-11-15 11:06:43.237604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.965 ms 00:22:56.503 [2024-11-15 11:06:43.237614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.238434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.238458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:56.503 [2024-11-15 11:06:43.238471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:22:56.503 [2024-11-15 11:06:43.238484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 
11:06:43.327293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.327362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:56.503 [2024-11-15 11:06:43.327385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.931 ms 00:22:56.503 [2024-11-15 11:06:43.327396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.338023] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:56.503 [2024-11-15 11:06:43.340903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.340934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:56.503 [2024-11-15 11:06:43.340948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.475 ms 00:22:56.503 [2024-11-15 11:06:43.340959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.341049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.341063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:56.503 [2024-11-15 11:06:43.341076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:56.503 [2024-11-15 11:06:43.341090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.342714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.342845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:56.503 [2024-11-15 11:06:43.342921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.566 ms 00:22:56.503 [2024-11-15 11:06:43.342956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.343013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.343045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:56.503 [2024-11-15 11:06:43.343120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:56.503 [2024-11-15 11:06:43.343155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.503 [2024-11-15 11:06:43.343220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:56.503 [2024-11-15 11:06:43.343257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.503 [2024-11-15 11:06:43.343326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:56.503 [2024-11-15 11:06:43.343361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:56.503 [2024-11-15 11:06:43.343433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.765 [2024-11-15 11:06:43.379579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.765 [2024-11-15 11:06:43.379633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:56.765 [2024-11-15 11:06:43.379648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.167 ms 00:22:56.765 [2024-11-15 11:06:43.379665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.765 [2024-11-15 11:06:43.379738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:56.765 [2024-11-15 11:06:43.379750] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:56.765 [2024-11-15 11:06:43.379762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:22:56.765 [2024-11-15 11:06:43.379772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:56.765 [2024-11-15 11:06:43.380852] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.782 ms, result 0 00:22:58.142  [2024-11-15T11:06:45.938Z] Copying: 19/1024 [MB] (19 MBps) [... 40 intermediate progress updates elided ...] [2024-11-15T11:07:24.881Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-15 11:07:24.649188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.649259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.020 [2024-11-15 11:07:24.649278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:38.020 [2024-11-15 11:07:24.649289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.649317] mngt/ftl_mngt_ioch.c:
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:38.020 [2024-11-15 11:07:24.653685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.653723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.020 [2024-11-15 11:07:24.653737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:23:38.020 [2024-11-15 11:07:24.653749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.653995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.654009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.020 [2024-11-15 11:07:24.654021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:23:38.020 [2024-11-15 11:07:24.654031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.658979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.659019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.020 [2024-11-15 11:07:24.659032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.935 ms 00:23:38.020 [2024-11-15 11:07:24.659042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.664098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.664547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.020 [2024-11-15 11:07:24.664570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.029 ms 00:23:38.020 [2024-11-15 11:07:24.664582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.703292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.703349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.020 [2024-11-15 11:07:24.703367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.695 ms 00:23:38.020 [2024-11-15 11:07:24.703377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.725477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.725556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.020 [2024-11-15 11:07:24.725573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.084 ms 00:23:38.020 [2024-11-15 11:07:24.725584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.020 [2024-11-15 11:07:24.865286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.020 [2024-11-15 11:07:24.865383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:38.020 [2024-11-15 11:07:24.865403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 139.872 ms 00:23:38.020 [2024-11-15 11:07:24.865414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.283 [2024-11-15 11:07:24.902762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.283 [2024-11-15 11:07:24.902994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:38.283 [2024-11-15 11:07:24.903019] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 37.386 ms 00:23:38.283 [2024-11-15 11:07:24.903030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.283 [2024-11-15 11:07:24.939734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.283 [2024-11-15 11:07:24.939793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:38.283 [2024-11-15 11:07:24.939823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.715 ms 00:23:38.283 [2024-11-15 11:07:24.939834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.283 [2024-11-15 11:07:24.975978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.283 [2024-11-15 11:07:24.976034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:38.283 [2024-11-15 11:07:24.976051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.148 ms 00:23:38.283 [2024-11-15 11:07:24.976062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.283 [2024-11-15 11:07:25.013297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.283 [2024-11-15 11:07:25.013357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:38.283 [2024-11-15 11:07:25.013375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.194 ms 00:23:38.283 [2024-11-15 11:07:25.013386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.283 [2024-11-15 11:07:25.013446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:38.283 [2024-11-15 11:07:25.013466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:38.283 [2024-11-15 11:07:25.013480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 through Band 100: 0 / 261120 wr_cnt: 0 state: free [99 identical per-band records elided] 00:23:38.285 [2024-11-15 11:07:25.014632] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:38.285 [2024-11-15 11:07:25.014643] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8e67806c-8a2b-44dc-bcca-3a4948b5bfb5 00:23:38.285 [2024-11-15 11:07:25.014655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:38.285 [2024-11-15 11:07:25.014665] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 27584 00:23:38.285 [2024-11-15 11:07:25.014675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 26624 00:23:38.285 [2024-11-15 11:07:25.014687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0361 00:23:38.285 [2024-11-15 11:07:25.014697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:38.285 [2024-11-15 11:07:25.014713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:38.285 [2024-11-15 11:07:25.014723] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:38.285 [2024-11-15 11:07:25.014743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:38.285 [2024-11-15 11:07:25.014753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:38.285 [2024-11-15 11:07:25.014762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.285 [2024-11-15 11:07:25.014774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:38.285 [2024-11-15 11:07:25.014785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:23:38.285 [2024-11-15 11:07:25.014797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.285 [2024-11-15 11:07:25.034353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:38.285 [2024-11-15 11:07:25.034404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:38.285 [2024-11-15 11:07:25.034419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.540 ms 00:23:38.285 [2024-11-15 11:07:25.034436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.285 [2024-11-15 11:07:25.034985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.285 [2024-11-15 11:07:25.035002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:38.285 [2024-11-15 11:07:25.035014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.514 ms 00:23:38.285 [2024-11-15 11:07:25.035024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.285 [2024-11-15 11:07:25.086742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.285 [2024-11-15 11:07:25.086801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.285 [2024-11-15 11:07:25.086823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.285 [2024-11-15 11:07:25.086834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.285 [2024-11-15 11:07:25.086911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.285 [2024-11-15 11:07:25.086922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.285 [2024-11-15 11:07:25.086933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.285 [2024-11-15 11:07:25.086944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.285 [2024-11-15 11:07:25.087048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.285 [2024-11-15 11:07:25.087061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.285 [2024-11-15 11:07:25.087072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.285 [2024-11-15 11:07:25.087087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.285 [2024-11-15 11:07:25.087105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.285 [2024-11-15 11:07:25.087116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.285 [2024-11-15 11:07:25.087127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.285 [2024-11-15 11:07:25.087136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.210581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.210650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.545 [2024-11-15 11:07:25.210671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.210683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.312453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.312537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.545 [2024-11-15 11:07:25.312555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.312566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 
11:07:25.312668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.312697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.545 [2024-11-15 11:07:25.312709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.312719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.312772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.312784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.545 [2024-11-15 11:07:25.312795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.312805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.312909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.312922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.545 [2024-11-15 11:07:25.312946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.312956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.312998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.313009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.545 [2024-11-15 11:07:25.313020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.313029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.313067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.313081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.545 [2024-11-15 11:07:25.313091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.313102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.313147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.545 [2024-11-15 11:07:25.313159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.545 [2024-11-15 11:07:25.313169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.545 [2024-11-15 11:07:25.313180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.545 [2024-11-15 11:07:25.313303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 665.157 ms, result 0 00:23:39.482 00:23:39.482 00:23:39.740 11:07:26 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:41.666 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm 
-f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76291 00:23:41.666 11:07:28 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 76291 ']' 00:23:41.666 11:07:28 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 76291 00:23:41.666 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76291) - No such process 00:23:41.666 Process with pid 76291 is not found 00:23:41.666 11:07:28 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 76291 is not found' 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:41.666 Remove shared memory files 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:41.666 11:07:28 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:41.666 ************************************ 00:23:41.666 END TEST ftl_restore 00:23:41.666 ************************************ 00:23:41.666 00:23:41.666 real 3m26.883s 00:23:41.666 user 3m13.786s 00:23:41.666 sys 0m14.508s 00:23:41.666 11:07:28 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.666 11:07:28 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:41.666 11:07:28 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:41.666 11:07:28 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:41.666 11:07:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.666 11:07:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:41.666 ************************************ 00:23:41.666 START TEST ftl_dirty_shutdown 00:23:41.666 ************************************ 00:23:41.666 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:41.926 * Looking for test storage... 
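[editor's note] The ftl_restore teardown traced above runs the harness's killprocess helper: it probes the recorded PID with kill -0 and, because the target already exited, prints "Process with pid 76291 is not found" instead of failing the test. A minimal sketch of that probe-then-kill pattern follows; it is an illustration of the behavior visible in the trace, not autotest_common.sh verbatim.

    # Sketch of a killprocess-style helper: kill a test daemon if it is
    # still running, succeed quietly if it already exited. Illustrative only.
    killprocess_sketch() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0                 # already gone: teardown still succeeds
        fi
        kill "$pid"
        # Wait for it to exit so later stages can reuse sockets/hugepages.
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
    }

This is why the EXIT trap can hand a stale PID (76291 here) to the cleanup path and the suite still reports "END TEST ftl_restore" cleanly.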
00:23:41.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:41.926 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:41.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.927 --rc genhtml_branch_coverage=1 00:23:41.927 --rc genhtml_function_coverage=1 00:23:41.927 --rc genhtml_legend=1 00:23:41.927 --rc geninfo_all_blocks=1 00:23:41.927 --rc geninfo_unexecuted_blocks=1 00:23:41.927 00:23:41.927 ' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:41.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.927 --rc genhtml_branch_coverage=1 00:23:41.927 --rc genhtml_function_coverage=1 00:23:41.927 --rc genhtml_legend=1 00:23:41.927 --rc geninfo_all_blocks=1 00:23:41.927 --rc geninfo_unexecuted_blocks=1 00:23:41.927 00:23:41.927 ' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:41.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.927 --rc genhtml_branch_coverage=1 00:23:41.927 --rc genhtml_function_coverage=1 00:23:41.927 --rc genhtml_legend=1 00:23:41.927 --rc geninfo_all_blocks=1 00:23:41.927 --rc geninfo_unexecuted_blocks=1 00:23:41.927 00:23:41.927 ' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:41.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.927 --rc genhtml_branch_coverage=1 00:23:41.927 --rc genhtml_function_coverage=1 00:23:41.927 --rc genhtml_legend=1 00:23:41.927 --rc geninfo_all_blocks=1 00:23:41.927 --rc geninfo_unexecuted_blocks=1 00:23:41.927 00:23:41.927 ' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:41.927 11:07:28 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78504 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78504 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 78504 ']' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.927 11:07:28 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:42.187 [2024-11-15 11:07:28.792022] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
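[editor's note] Before issuing any RPCs, dirty_shutdown.sh launches spdk_tgt pinned to core 0 (-m 0x1), records svcpid=78504, and blocks in waitforlisten until the target's UNIX-domain RPC socket answers; the EAL parameter dump that follows is that same process initializing. Below is a minimal sketch of such a readiness loop, assuming the default /var/tmp/spdk.sock path; it is an illustration, not the harness's exact implementation.

    # Sketch of a waitforlisten-style readiness loop (illustrative).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local deadline=$((SECONDS + 100))
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        while (( SECONDS < deadline )); do
            # Abort if spdk_tgt died during startup instead of spinning forever.
            kill -0 "$pid" 2>/dev/null || return 1
            # Ready once the socket exists and answers a trivial RPC.
            if [[ -S $sock ]] && "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

Invoked as waitforlisten_sketch "$svcpid" right after launching the target, this mirrors the svcpid=78504 / waitforlisten 78504 pair in the trace: no bdev_nvme_attach_controller call is attempted until the socket is live.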
00:23:42.187 [2024-11-15 11:07:28.792509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78504 ] 00:23:42.187 [2024-11-15 11:07:28.974833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.446 [2024-11-15 11:07:29.094573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:43.385 11:07:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:43.386 11:07:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:43.386 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:43.644 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:43.644 { 00:23:43.644 "name": "nvme0n1", 00:23:43.644 "aliases": [ 00:23:43.644 "93bc29ca-b876-4404-9031-a37fd558da9f" 00:23:43.644 ], 00:23:43.644 "product_name": "NVMe disk", 00:23:43.644 "block_size": 4096, 00:23:43.644 "num_blocks": 1310720, 00:23:43.644 "uuid": "93bc29ca-b876-4404-9031-a37fd558da9f", 00:23:43.644 "numa_id": -1, 00:23:43.644 "assigned_rate_limits": { 00:23:43.644 "rw_ios_per_sec": 0, 00:23:43.644 "rw_mbytes_per_sec": 0, 00:23:43.644 "r_mbytes_per_sec": 0, 00:23:43.644 "w_mbytes_per_sec": 0 00:23:43.644 }, 00:23:43.644 "claimed": true, 00:23:43.644 "claim_type": "read_many_write_one", 00:23:43.644 "zoned": false, 00:23:43.644 "supported_io_types": { 00:23:43.645 "read": true, 00:23:43.645 "write": true, 00:23:43.645 "unmap": true, 00:23:43.645 "flush": true, 00:23:43.645 "reset": true, 00:23:43.645 "nvme_admin": true, 00:23:43.645 "nvme_io": true, 00:23:43.645 "nvme_io_md": false, 00:23:43.645 "write_zeroes": true, 00:23:43.645 "zcopy": false, 00:23:43.645 "get_zone_info": false, 00:23:43.645 "zone_management": false, 00:23:43.645 "zone_append": false, 00:23:43.645 "compare": true, 00:23:43.645 "compare_and_write": false, 00:23:43.645 "abort": true, 00:23:43.645 "seek_hole": false, 00:23:43.645 "seek_data": false, 00:23:43.645 
"copy": true, 00:23:43.645 "nvme_iov_md": false 00:23:43.645 }, 00:23:43.645 "driver_specific": { 00:23:43.645 "nvme": [ 00:23:43.645 { 00:23:43.645 "pci_address": "0000:00:11.0", 00:23:43.645 "trid": { 00:23:43.645 "trtype": "PCIe", 00:23:43.645 "traddr": "0000:00:11.0" 00:23:43.645 }, 00:23:43.645 "ctrlr_data": { 00:23:43.645 "cntlid": 0, 00:23:43.645 "vendor_id": "0x1b36", 00:23:43.645 "model_number": "QEMU NVMe Ctrl", 00:23:43.645 "serial_number": "12341", 00:23:43.645 "firmware_revision": "8.0.0", 00:23:43.645 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:43.645 "oacs": { 00:23:43.645 "security": 0, 00:23:43.645 "format": 1, 00:23:43.645 "firmware": 0, 00:23:43.645 "ns_manage": 1 00:23:43.645 }, 00:23:43.645 "multi_ctrlr": false, 00:23:43.645 "ana_reporting": false 00:23:43.645 }, 00:23:43.645 "vs": { 00:23:43.645 "nvme_version": "1.4" 00:23:43.645 }, 00:23:43.645 "ns_data": { 00:23:43.645 "id": 1, 00:23:43.645 "can_share": false 00:23:43.645 } 00:23:43.645 } 00:23:43.645 ], 00:23:43.645 "mp_policy": "active_passive" 00:23:43.645 } 00:23:43.645 } 00:23:43.645 ]' 00:23:43.645 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:43.645 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:43.645 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=c9e7e241-fcf9-44a9-aac2-c0e43e734204 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:43.904 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9e7e241-fcf9-44a9-aac2-c0e43e734204 00:23:44.162 11:07:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:44.420 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=db8059ac-1ebb-473e-a811-7a6b379eb895 00:23:44.420 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u db8059ac-1ebb-473e-a811-7a6b379eb895 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:44.679 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:44.938 { 00:23:44.938 "name": "53af7f7b-c123-47c4-8a75-d2b390aa48b1", 00:23:44.938 "aliases": [ 00:23:44.938 "lvs/nvme0n1p0" 00:23:44.938 ], 00:23:44.938 "product_name": "Logical Volume", 00:23:44.938 "block_size": 4096, 00:23:44.938 "num_blocks": 26476544, 00:23:44.938 "uuid": "53af7f7b-c123-47c4-8a75-d2b390aa48b1", 00:23:44.938 "assigned_rate_limits": { 00:23:44.938 "rw_ios_per_sec": 0, 00:23:44.938 "rw_mbytes_per_sec": 0, 00:23:44.938 "r_mbytes_per_sec": 0, 00:23:44.938 "w_mbytes_per_sec": 0 00:23:44.938 }, 00:23:44.938 "claimed": false, 00:23:44.938 "zoned": false, 00:23:44.938 "supported_io_types": { 00:23:44.938 "read": true, 00:23:44.938 "write": true, 00:23:44.938 "unmap": true, 00:23:44.938 "flush": false, 00:23:44.938 "reset": true, 00:23:44.938 "nvme_admin": false, 00:23:44.938 "nvme_io": false, 00:23:44.938 "nvme_io_md": false, 00:23:44.938 "write_zeroes": true, 00:23:44.938 "zcopy": false, 00:23:44.938 "get_zone_info": false, 00:23:44.938 "zone_management": false, 00:23:44.938 "zone_append": false, 00:23:44.938 "compare": false, 00:23:44.938 "compare_and_write": false, 00:23:44.938 "abort": false, 00:23:44.938 "seek_hole": true, 00:23:44.938 "seek_data": true, 00:23:44.938 "copy": false, 00:23:44.938 "nvme_iov_md": false 00:23:44.938 }, 00:23:44.938 "driver_specific": { 00:23:44.938 "lvol": { 00:23:44.938 "lvol_store_uuid": "db8059ac-1ebb-473e-a811-7a6b379eb895", 00:23:44.938 "base_bdev": "nvme0n1", 00:23:44.938 "thin_provision": true, 00:23:44.938 "num_allocated_clusters": 0, 00:23:44.938 "snapshot": false, 00:23:44.938 "clone": false, 00:23:44.938 "esnap_clone": false 00:23:44.938 } 00:23:44.938 } 00:23:44.938 } 00:23:44.938 ]' 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:44.938 11:07:31 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:45.196 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:45.196 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:45.197 11:07:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:45.197 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:45.197 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:45.197 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:45.197 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:45.197 11:07:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:45.455 { 00:23:45.455 "name": "53af7f7b-c123-47c4-8a75-d2b390aa48b1", 00:23:45.455 "aliases": [ 00:23:45.455 "lvs/nvme0n1p0" 00:23:45.455 ], 00:23:45.455 "product_name": "Logical Volume", 00:23:45.455 "block_size": 4096, 00:23:45.455 "num_blocks": 26476544, 00:23:45.455 "uuid": "53af7f7b-c123-47c4-8a75-d2b390aa48b1", 00:23:45.455 "assigned_rate_limits": { 00:23:45.455 "rw_ios_per_sec": 0, 00:23:45.455 "rw_mbytes_per_sec": 0, 00:23:45.455 "r_mbytes_per_sec": 0, 00:23:45.455 "w_mbytes_per_sec": 0 00:23:45.455 }, 00:23:45.455 "claimed": false, 00:23:45.455 "zoned": false, 00:23:45.455 "supported_io_types": { 00:23:45.455 "read": true, 00:23:45.455 "write": true, 00:23:45.455 "unmap": true, 00:23:45.455 "flush": false, 00:23:45.455 "reset": true, 00:23:45.455 "nvme_admin": false, 00:23:45.455 "nvme_io": false, 00:23:45.455 "nvme_io_md": false, 00:23:45.455 "write_zeroes": true, 00:23:45.455 "zcopy": false, 00:23:45.455 "get_zone_info": false, 00:23:45.455 "zone_management": false, 00:23:45.455 "zone_append": false, 00:23:45.455 "compare": false, 00:23:45.455 "compare_and_write": false, 00:23:45.455 "abort": false, 00:23:45.455 "seek_hole": true, 00:23:45.455 "seek_data": true, 00:23:45.455 "copy": false, 00:23:45.455 "nvme_iov_md": false 00:23:45.455 }, 00:23:45.455 "driver_specific": { 00:23:45.455 "lvol": { 00:23:45.455 "lvol_store_uuid": "db8059ac-1ebb-473e-a811-7a6b379eb895", 00:23:45.455 "base_bdev": "nvme0n1", 00:23:45.455 "thin_provision": true, 00:23:45.455 "num_allocated_clusters": 0, 00:23:45.455 "snapshot": false, 00:23:45.455 "clone": false, 00:23:45.455 "esnap_clone": false 00:23:45.455 } 00:23:45.455 } 00:23:45.455 } 00:23:45.455 ]' 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:45.455 11:07:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:23:45.714 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 53af7f7b-c123-47c4-8a75-d2b390aa48b1 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:45.973 { 00:23:45.973 "name": "53af7f7b-c123-47c4-8a75-d2b390aa48b1", 00:23:45.973 "aliases": [ 00:23:45.973 "lvs/nvme0n1p0" 00:23:45.973 ], 00:23:45.973 "product_name": "Logical Volume", 00:23:45.973 "block_size": 4096, 00:23:45.973 "num_blocks": 26476544, 00:23:45.973 "uuid": "53af7f7b-c123-47c4-8a75-d2b390aa48b1", 00:23:45.973 "assigned_rate_limits": { 00:23:45.973 "rw_ios_per_sec": 0, 00:23:45.973 "rw_mbytes_per_sec": 0, 00:23:45.973 "r_mbytes_per_sec": 0, 00:23:45.973 "w_mbytes_per_sec": 0 00:23:45.973 }, 00:23:45.973 "claimed": false, 00:23:45.973 "zoned": false, 00:23:45.973 "supported_io_types": { 00:23:45.973 "read": true, 00:23:45.973 "write": true, 00:23:45.973 "unmap": true, 00:23:45.973 "flush": false, 00:23:45.973 "reset": true, 00:23:45.973 "nvme_admin": false, 00:23:45.973 "nvme_io": false, 00:23:45.973 "nvme_io_md": false, 00:23:45.973 "write_zeroes": true, 00:23:45.973 "zcopy": false, 00:23:45.973 "get_zone_info": false, 00:23:45.973 "zone_management": false, 00:23:45.973 "zone_append": false, 00:23:45.973 "compare": false, 00:23:45.973 "compare_and_write": false, 00:23:45.973 "abort": false, 00:23:45.973 "seek_hole": true, 00:23:45.973 "seek_data": true, 00:23:45.973 "copy": false, 00:23:45.973 "nvme_iov_md": false 00:23:45.973 }, 00:23:45.973 "driver_specific": { 00:23:45.973 "lvol": { 00:23:45.973 "lvol_store_uuid": "db8059ac-1ebb-473e-a811-7a6b379eb895", 00:23:45.973 "base_bdev": "nvme0n1", 00:23:45.973 "thin_provision": true, 00:23:45.973 "num_allocated_clusters": 0, 00:23:45.973 "snapshot": false, 00:23:45.973 "clone": false, 00:23:45.973 "esnap_clone": false 00:23:45.973 } 00:23:45.973 } 00:23:45.973 } 00:23:45.973 ]' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 53af7f7b-c123-47c4-8a75-d2b390aa48b1 
--l2p_dram_limit 10' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:45.973 11:07:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 53af7f7b-c123-47c4-8a75-d2b390aa48b1 --l2p_dram_limit 10 -c nvc0n1p0 00:23:46.233 [2024-11-15 11:07:32.940323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.940610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:46.234 [2024-11-15 11:07:32.940645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:46.234 [2024-11-15 11:07:32.940660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.940765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.940781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.234 [2024-11-15 11:07:32.940799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:46.234 [2024-11-15 11:07:32.940812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.940847] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:46.234 [2024-11-15 11:07:32.941864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:46.234 [2024-11-15 11:07:32.941898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.941910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.234 [2024-11-15 11:07:32.941926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:23:46.234 [2024-11-15 11:07:32.941936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.942020] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1730afc2-28ce-4d76-a5dc-be7a05b3dc82 00:23:46.234 [2024-11-15 11:07:32.943479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.943502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:46.234 [2024-11-15 11:07:32.943514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:46.234 [2024-11-15 11:07:32.943538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.951099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.951272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.234 [2024-11-15 11:07:32.951298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:23:46.234 [2024-11-15 11:07:32.951312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.951424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.951441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.234 [2024-11-15 11:07:32.951452] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:46.234 [2024-11-15 11:07:32.951469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.951531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.951557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:46.234 [2024-11-15 11:07:32.951568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:46.234 [2024-11-15 11:07:32.951584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.951611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:46.234 [2024-11-15 11:07:32.956603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.956634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.234 [2024-11-15 11:07:32.956650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.005 ms 00:23:46.234 [2024-11-15 11:07:32.956676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.956712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.956723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:46.234 [2024-11-15 11:07:32.956736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:46.234 [2024-11-15 11:07:32.956747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.956792] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:46.234 [2024-11-15 11:07:32.956918] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:46.234 [2024-11-15 11:07:32.956938] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:46.234 [2024-11-15 11:07:32.956951] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:46.234 [2024-11-15 11:07:32.956966] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:46.234 [2024-11-15 11:07:32.956979] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:46.234 [2024-11-15 11:07:32.956992] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:46.234 [2024-11-15 11:07:32.957002] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:46.234 [2024-11-15 11:07:32.957017] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:46.234 [2024-11-15 11:07:32.957027] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:46.234 [2024-11-15 11:07:32.957040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.957050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:46.234 [2024-11-15 11:07:32.957063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:23:46.234 [2024-11-15 11:07:32.957084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.957162] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.234 [2024-11-15 11:07:32.957173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:46.234 [2024-11-15 11:07:32.957185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:46.234 [2024-11-15 11:07:32.957194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.234 [2024-11-15 11:07:32.957286] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:46.234 [2024-11-15 11:07:32.957298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:46.234 [2024-11-15 11:07:32.957310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:46.234 [2024-11-15 11:07:32.957321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:46.234 [2024-11-15 11:07:32.957343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:46.234 [2024-11-15 11:07:32.957364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:46.234 [2024-11-15 11:07:32.957376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:46.234 [2024-11-15 11:07:32.957397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:46.234 [2024-11-15 11:07:32.957407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:46.234 [2024-11-15 11:07:32.957419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:46.234 [2024-11-15 11:07:32.957429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:46.234 [2024-11-15 11:07:32.957443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:46.234 [2024-11-15 11:07:32.957452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:46.234 [2024-11-15 11:07:32.957476] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:46.234 [2024-11-15 11:07:32.957489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:46.234 [2024-11-15 11:07:32.957511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.234 [2024-11-15 11:07:32.957531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:46.234 [2024-11-15 11:07:32.957724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.234 [2024-11-15 11:07:32.957805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:46.234 [2024-11-15 11:07:32.957837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.234 [2024-11-15 11:07:32.957898] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:46.234 [2024-11-15 11:07:32.957929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:46.234 [2024-11-15 11:07:32.957960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:46.234 [2024-11-15 11:07:32.958056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:46.234 [2024-11-15 11:07:32.958099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:46.234 [2024-11-15 11:07:32.958129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:46.234 [2024-11-15 11:07:32.958162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:46.234 [2024-11-15 11:07:32.958191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:46.234 [2024-11-15 11:07:32.958222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:46.234 [2024-11-15 11:07:32.958251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:46.234 [2024-11-15 11:07:32.958285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:46.234 [2024-11-15 11:07:32.958378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.234 [2024-11-15 11:07:32.958417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:46.234 [2024-11-15 11:07:32.958447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:46.234 [2024-11-15 11:07:32.958479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.234 [2024-11-15 11:07:32.958508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:46.234 [2024-11-15 11:07:32.958665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:46.234 [2024-11-15 11:07:32.958698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:46.234 [2024-11-15 11:07:32.958733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:46.235 [2024-11-15 11:07:32.958764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:46.235 [2024-11-15 11:07:32.958848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:46.235 [2024-11-15 11:07:32.958884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:46.235 [2024-11-15 11:07:32.958917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:46.235 [2024-11-15 11:07:32.959087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:46.235 [2024-11-15 11:07:32.959106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:46.235 [2024-11-15 11:07:32.959122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:46.235 [2024-11-15 11:07:32.959139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:46.235 [2024-11-15 11:07:32.959167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:46.235 [2024-11-15 11:07:32.959178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:46.235 [2024-11-15 11:07:32.959191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:46.235 [2024-11-15 11:07:32.959201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:46.235 [2024-11-15 11:07:32.959214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:46.235 [2024-11-15 11:07:32.959225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:46.235 [2024-11-15 11:07:32.959238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:46.235 [2024-11-15 11:07:32.959248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:46.235 [2024-11-15 11:07:32.959264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:46.235 [2024-11-15 11:07:32.959323] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:46.235 [2024-11-15 11:07:32.959337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959349] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:46.235 [2024-11-15 11:07:32.959362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:46.235 [2024-11-15 11:07:32.959372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:46.235 [2024-11-15 11:07:32.959386] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:46.235 [2024-11-15 11:07:32.959398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.235 [2024-11-15 11:07:32.959411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:46.235 [2024-11-15 11:07:32.959430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.171 ms 00:23:46.235 [2024-11-15 11:07:32.959444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.235 [2024-11-15 11:07:32.959496] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:46.235 [2024-11-15 11:07:32.959514] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:49.524 [2024-11-15 11:07:36.247117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.524 [2024-11-15 11:07:36.247201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:49.524 [2024-11-15 11:07:36.247220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3292.956 ms 00:23:49.524 [2024-11-15 11:07:36.247234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.524 [2024-11-15 11:07:36.287278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.524 [2024-11-15 11:07:36.287344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.524 [2024-11-15 11:07:36.287361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.783 ms 00:23:49.525 [2024-11-15 11:07:36.287375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.287790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.287851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:49.525 [2024-11-15 11:07:36.287867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.302 ms 00:23:49.525 [2024-11-15 11:07:36.287885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.338857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.338923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.525 [2024-11-15 11:07:36.338939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.957 ms 00:23:49.525 [2024-11-15 11:07:36.338953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.339010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.339030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.525 [2024-11-15 11:07:36.339042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:49.525 [2024-11-15 11:07:36.339054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.339580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.339603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.525 [2024-11-15 11:07:36.339615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:23:49.525 [2024-11-15 11:07:36.339628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.339737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.339752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.525 [2024-11-15 11:07:36.339765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:23:49.525 [2024-11-15 11:07:36.339780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.361698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.361763] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.525 [2024-11-15 11:07:36.361780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.929 ms 00:23:49.525 [2024-11-15 11:07:36.361793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.525 [2024-11-15 11:07:36.375267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:49.525 [2024-11-15 11:07:36.378563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.525 [2024-11-15 11:07:36.378598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:49.525 [2024-11-15 11:07:36.378617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.665 ms 00:23:49.525 [2024-11-15 11:07:36.378627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.784 [2024-11-15 11:07:36.479468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.784 [2024-11-15 11:07:36.479548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:49.784 [2024-11-15 11:07:36.479570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.949 ms 00:23:49.784 [2024-11-15 11:07:36.479593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.784 [2024-11-15 11:07:36.479828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.784 [2024-11-15 11:07:36.479848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:49.784 [2024-11-15 11:07:36.479866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:23:49.784 [2024-11-15 11:07:36.479876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.784 [2024-11-15 11:07:36.516742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.784 [2024-11-15 11:07:36.516923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:49.784 [2024-11-15 11:07:36.516952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.862 ms 00:23:49.784 [2024-11-15 11:07:36.516963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.784 [2024-11-15 11:07:36.553666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.784 [2024-11-15 11:07:36.553716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:49.784 [2024-11-15 11:07:36.553736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.709 ms 00:23:49.784 [2024-11-15 11:07:36.553746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.784 [2024-11-15 11:07:36.554522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.784 [2024-11-15 11:07:36.554554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:49.784 [2024-11-15 11:07:36.554569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:23:49.784 [2024-11-15 11:07:36.554580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.654291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.044 [2024-11-15 11:07:36.654357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:50.044 [2024-11-15 11:07:36.654381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.801 ms 00:23:50.044 [2024-11-15 11:07:36.654393] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.692617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.044 [2024-11-15 11:07:36.692672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:50.044 [2024-11-15 11:07:36.692690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.185 ms 00:23:50.044 [2024-11-15 11:07:36.692701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.729998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.044 [2024-11-15 11:07:36.730058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:50.044 [2024-11-15 11:07:36.730077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.303 ms 00:23:50.044 [2024-11-15 11:07:36.730089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.767305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.044 [2024-11-15 11:07:36.767355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:50.044 [2024-11-15 11:07:36.767372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.224 ms 00:23:50.044 [2024-11-15 11:07:36.767383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.767449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.044 [2024-11-15 11:07:36.767462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:50.044 [2024-11-15 11:07:36.767479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:50.044 [2024-11-15 11:07:36.767490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.767614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.044 [2024-11-15 11:07:36.767628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:50.044 [2024-11-15 11:07:36.767645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:50.044 [2024-11-15 11:07:36.767656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.044 [2024-11-15 11:07:36.768655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3834.093 ms, result 0 00:23:50.044 { 00:23:50.044 "name": "ftl0", 00:23:50.044 "uuid": "1730afc2-28ce-4d76-a5dc-be7a05b3dc82" 00:23:50.044 } 00:23:50.044 11:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:50.044 11:07:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:50.304 11:07:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:50.304 11:07:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:50.304 11:07:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:50.564 /dev/nbd0 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:50.564 1+0 records in 00:23:50.564 1+0 records out 00:23:50.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406063 s, 10.1 MB/s 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:23:50.564 11:07:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:50.823 [2024-11-15 11:07:37.443088] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:23:50.823 [2024-11-15 11:07:37.443698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78647 ] 00:23:50.823 [2024-11-15 11:07:37.626703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.083 [2024-11-15 11:07:37.748825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.461  [2024-11-15T11:07:40.257Z] Copying: 197/1024 [MB] (197 MBps) [2024-11-15T11:07:41.196Z] Copying: 396/1024 [MB] (198 MBps) [2024-11-15T11:07:42.134Z] Copying: 595/1024 [MB] (198 MBps) [2024-11-15T11:07:43.080Z] Copying: 788/1024 [MB] (193 MBps) [2024-11-15T11:07:43.340Z] Copying: 976/1024 [MB] (187 MBps) [2024-11-15T11:07:44.725Z] Copying: 1024/1024 [MB] (average 195 MBps) 00:23:57.864 00:23:57.864 11:07:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:59.769 11:07:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:59.769 [2024-11-15 11:07:46.313417] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
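The sequence traced above is the core of the dirty-shutdown exercise: expose ftl0 over nbd, write a known 1 GiB pattern (262144 x 4096-byte blocks), checksum it, push it through the FTL device, then tear the device down. A minimal shell sketch of that round-trip, assuming a running SPDK target with the ftl0 bdev already created and scripts/rpc.py on PATH; plain dd stands in here for the spdk_dd binary the test actually drives, and the file names are illustrative rather than the test's real paths:

    scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0                  # expose ftl0 as a kernel block device
    dd if=/dev/urandom of=testfile bs=4096 count=262144           # 1 GiB of random data (matches --count above)
    md5sum testfile > testfile.md5                                # checksum kept for the post-recovery compare
    dd if=testfile of=/dev/nbd0 bs=4096 count=262144 oflag=direct # write the pattern through the FTL layer
    sync /dev/nbd0                                                # flush the nbd queue before teardown
    scripts/rpc.py nbd_stop_disk /dev/nbd0
    scripts/rpc.py bdev_ftl_unload -b ftl0                        # unload runs the persist/clean-state steps logged below

The saved md5sum is presumably what the later half of the test compares against once the FTL device is brought back up, so a matching checksum after recovery shows the NV-cache recovery path preserved the data.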
00:23:59.769 [2024-11-15 11:07:46.313617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78741 ] 00:23:59.769 [2024-11-15 11:07:46.495820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.769 [2024-11-15 11:07:46.612303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.147  [2024-11-15T11:07:48.945Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-15T11:07:50.322Z] Copying: 35/1024 [MB] (17 MBps) [2024-11-15T11:07:51.264Z] Copying: 52/1024 [MB] (17 MBps) [2024-11-15T11:07:52.206Z] Copying: 69/1024 [MB] (16 MBps) [2024-11-15T11:07:53.144Z] Copying: 86/1024 [MB] (16 MBps) [2024-11-15T11:07:54.082Z] Copying: 103/1024 [MB] (17 MBps) [2024-11-15T11:07:55.019Z] Copying: 121/1024 [MB] (18 MBps) [2024-11-15T11:07:55.957Z] Copying: 139/1024 [MB] (17 MBps) [2024-11-15T11:07:57.335Z] Copying: 157/1024 [MB] (18 MBps) [2024-11-15T11:07:58.272Z] Copying: 175/1024 [MB] (17 MBps) [2024-11-15T11:07:59.210Z] Copying: 193/1024 [MB] (17 MBps) [2024-11-15T11:08:00.154Z] Copying: 211/1024 [MB] (18 MBps) [2024-11-15T11:08:01.091Z] Copying: 229/1024 [MB] (17 MBps) [2024-11-15T11:08:02.027Z] Copying: 246/1024 [MB] (17 MBps) [2024-11-15T11:08:02.963Z] Copying: 264/1024 [MB] (18 MBps) [2024-11-15T11:08:04.342Z] Copying: 282/1024 [MB] (18 MBps) [2024-11-15T11:08:04.908Z] Copying: 300/1024 [MB] (17 MBps) [2024-11-15T11:08:06.287Z] Copying: 318/1024 [MB] (17 MBps) [2024-11-15T11:08:07.225Z] Copying: 336/1024 [MB] (17 MBps) [2024-11-15T11:08:08.163Z] Copying: 353/1024 [MB] (16 MBps) [2024-11-15T11:08:09.112Z] Copying: 369/1024 [MB] (16 MBps) [2024-11-15T11:08:10.047Z] Copying: 386/1024 [MB] (16 MBps) [2024-11-15T11:08:10.983Z] Copying: 403/1024 [MB] (17 MBps) [2024-11-15T11:08:11.919Z] Copying: 420/1024 [MB] (17 MBps) [2024-11-15T11:08:13.298Z] Copying: 437/1024 [MB] (16 MBps) [2024-11-15T11:08:14.234Z] Copying: 454/1024 [MB] (16 MBps) [2024-11-15T11:08:15.170Z] Copying: 471/1024 [MB] (17 MBps) [2024-11-15T11:08:16.105Z] Copying: 487/1024 [MB] (16 MBps) [2024-11-15T11:08:17.040Z] Copying: 504/1024 [MB] (16 MBps) [2024-11-15T11:08:17.977Z] Copying: 521/1024 [MB] (16 MBps) [2024-11-15T11:08:18.912Z] Copying: 538/1024 [MB] (17 MBps) [2024-11-15T11:08:20.290Z] Copying: 555/1024 [MB] (17 MBps) [2024-11-15T11:08:21.225Z] Copying: 572/1024 [MB] (16 MBps) [2024-11-15T11:08:22.162Z] Copying: 588/1024 [MB] (16 MBps) [2024-11-15T11:08:23.096Z] Copying: 605/1024 [MB] (16 MBps) [2024-11-15T11:08:24.033Z] Copying: 622/1024 [MB] (16 MBps) [2024-11-15T11:08:24.969Z] Copying: 638/1024 [MB] (16 MBps) [2024-11-15T11:08:25.905Z] Copying: 655/1024 [MB] (16 MBps) [2024-11-15T11:08:27.282Z] Copying: 670/1024 [MB] (15 MBps) [2024-11-15T11:08:28.218Z] Copying: 687/1024 [MB] (16 MBps) [2024-11-15T11:08:29.155Z] Copying: 704/1024 [MB] (16 MBps) [2024-11-15T11:08:30.094Z] Copying: 721/1024 [MB] (17 MBps) [2024-11-15T11:08:31.047Z] Copying: 739/1024 [MB] (17 MBps) [2024-11-15T11:08:31.983Z] Copying: 756/1024 [MB] (17 MBps) [2024-11-15T11:08:32.919Z] Copying: 774/1024 [MB] (17 MBps) [2024-11-15T11:08:33.854Z] Copying: 792/1024 [MB] (17 MBps) [2024-11-15T11:08:35.231Z] Copying: 809/1024 [MB] (17 MBps) [2024-11-15T11:08:36.167Z] Copying: 827/1024 [MB] (17 MBps) [2024-11-15T11:08:37.101Z] Copying: 844/1024 [MB] (17 MBps) [2024-11-15T11:08:38.037Z] Copying: 862/1024 [MB] (17 MBps) 
[2024-11-15T11:08:38.972Z] Copying: 879/1024 [MB] (17 MBps) [2024-11-15T11:08:39.915Z] Copying: 897/1024 [MB] (17 MBps) [2024-11-15T11:08:40.876Z] Copying: 915/1024 [MB] (17 MBps) [2024-11-15T11:08:42.255Z] Copying: 933/1024 [MB] (18 MBps) [2024-11-15T11:08:43.193Z] Copying: 951/1024 [MB] (18 MBps) [2024-11-15T11:08:44.129Z] Copying: 969/1024 [MB] (17 MBps) [2024-11-15T11:08:45.065Z] Copying: 986/1024 [MB] (17 MBps) [2024-11-15T11:08:45.999Z] Copying: 1003/1024 [MB] (17 MBps) [2024-11-15T11:08:46.257Z] Copying: 1020/1024 [MB] (16 MBps) [2024-11-15T11:08:47.193Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:25:00.332 00:25:00.332 11:08:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:00.590 11:08:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:00.590 11:08:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:00.849 [2024-11-15 11:08:47.595295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.595378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:00.849 [2024-11-15 11:08:47.595398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:00.849 [2024-11-15 11:08:47.595414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.595444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:00.849 [2024-11-15 11:08:47.600430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.600470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:00.849 [2024-11-15 11:08:47.600487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.961 ms 00:25:00.849 [2024-11-15 11:08:47.600499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.602553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.602594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:00.849 [2024-11-15 11:08:47.602612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.989 ms 00:25:00.849 [2024-11-15 11:08:47.602624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.620928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.620975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:00.849 [2024-11-15 11:08:47.620994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.305 ms 00:25:00.849 [2024-11-15 11:08:47.621005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.626106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.626142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:00.849 [2024-11-15 11:08:47.626159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.061 ms 00:25:00.849 [2024-11-15 11:08:47.626169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.664504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.664556] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:00.849 [2024-11-15 11:08:47.664574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.254 ms 00:25:00.849 [2024-11-15 11:08:47.664585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.687928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.688130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:00.849 [2024-11-15 11:08:47.688161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.325 ms 00:25:00.849 [2024-11-15 11:08:47.688176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-11-15 11:08:47.688345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-11-15 11:08:47.688360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:00.849 [2024-11-15 11:08:47.688375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:00.849 [2024-11-15 11:08:47.688386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.109 [2024-11-15 11:08:47.725916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.109 [2024-11-15 11:08:47.726073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:01.109 [2024-11-15 11:08:47.726101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.565 ms 00:25:01.109 [2024-11-15 11:08:47.726112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.109 [2024-11-15 11:08:47.762135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.109 [2024-11-15 11:08:47.762174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:01.109 [2024-11-15 11:08:47.762192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.008 ms 00:25:01.109 [2024-11-15 11:08:47.762202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.109 [2024-11-15 11:08:47.797431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.109 [2024-11-15 11:08:47.797628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:01.109 [2024-11-15 11:08:47.797657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.236 ms 00:25:01.109 [2024-11-15 11:08:47.797667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.109 [2024-11-15 11:08:47.833602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.109 [2024-11-15 11:08:47.833639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:01.109 [2024-11-15 11:08:47.833656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.887 ms 00:25:01.109 [2024-11-15 11:08:47.833667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.109 [2024-11-15 11:08:47.833713] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:01.109 [2024-11-15 11:08:47.833733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.833991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:01.109 [2024-11-15 11:08:47.834230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834438] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834801] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.834992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:01.110 [2024-11-15 11:08:47.835116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:01.110 [2024-11-15 11:08:47.835129] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1730afc2-28ce-4d76-a5dc-be7a05b3dc82 00:25:01.110 [2024-11-15 11:08:47.835141] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:01.110 [2024-11-15 11:08:47.835158] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:01.110 [2024-11-15 11:08:47.835168] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:01.110 [2024-11-15 11:08:47.835187] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:01.110 [2024-11-15 11:08:47.835197] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:01.110 [2024-11-15 11:08:47.835212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:01.111 [2024-11-15 11:08:47.835222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:01.111 [2024-11-15 11:08:47.835235] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:01.111 [2024-11-15 11:08:47.835244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:01.111 [2024-11-15 11:08:47.835258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.111 [2024-11-15 11:08:47.835269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:01.111 [2024-11-15 11:08:47.835284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.551 ms 00:25:01.111 [2024-11-15 11:08:47.835294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.111 [2024-11-15 11:08:47.857311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.111 [2024-11-15 11:08:47.857465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:01.111 [2024-11-15 11:08:47.857496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.980 ms 00:25:01.111 [2024-11-15 11:08:47.857507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.111 [2024-11-15 11:08:47.858142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.111 [2024-11-15 11:08:47.858160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:01.111 [2024-11-15 11:08:47.858175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.571 ms 00:25:01.111 [2024-11-15 11:08:47.858185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.111 [2024-11-15 11:08:47.929376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.111 [2024-11-15 11:08:47.929424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:01.111 [2024-11-15 11:08:47.929443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.111 [2024-11-15 11:08:47.929454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.111 [2024-11-15 11:08:47.929548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.111 [2024-11-15 11:08:47.929569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:01.111 [2024-11-15 11:08:47.929584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.111 [2024-11-15 11:08:47.929596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.111 [2024-11-15 11:08:47.929697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.111 [2024-11-15 11:08:47.929712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:01.111 [2024-11-15 11:08:47.929731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:25:01.111 [2024-11-15 11:08:47.929743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.111 [2024-11-15 11:08:47.929772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.111 [2024-11-15 11:08:47.929784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:01.111 [2024-11-15 11:08:47.929797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.111 [2024-11-15 11:08:47.929808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.067556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.067632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:01.370 [2024-11-15 11:08:48.067652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.370 [2024-11-15 11:08:48.067664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.174340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.174411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:01.370 [2024-11-15 11:08:48.174431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.370 [2024-11-15 11:08:48.174443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.174643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.174659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:01.370 [2024-11-15 11:08:48.174674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.370 [2024-11-15 11:08:48.174690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.174772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.174785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:01.370 [2024-11-15 11:08:48.174799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.370 [2024-11-15 11:08:48.174810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.174949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.174963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:01.370 [2024-11-15 11:08:48.174978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.370 [2024-11-15 11:08:48.174989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.175038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.175051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:01.370 [2024-11-15 11:08:48.175067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:01.370 [2024-11-15 11:08:48.175077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.370 [2024-11-15 11:08:48.175127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:01.370 [2024-11-15 11:08:48.175139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:01.370 [2024-11-15 
11:08:48.175153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:01.370 [2024-11-15 11:08:48.175163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:01.370 [2024-11-15 11:08:48.175229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:01.370 [2024-11-15 11:08:48.175248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:01.370 [2024-11-15 11:08:48.175269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:01.370 [2024-11-15 11:08:48.175286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:01.370 [2024-11-15 11:08:48.175458] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 581.054 ms, result 0
00:25:01.370 true
00:25:01.370 11:08:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78504
00:25:01.370 11:08:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78504
00:25:01.370 11:08:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
00:25:01.629 [2024-11-15 11:08:48.304079] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
00:25:01.629 [2024-11-15 11:08:48.304213] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79372 ]
00:25:01.629 [2024-11-15 11:08:48.487288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:01.887 [2024-11-15 11:08:48.637433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:03.263  [2024-11-15T11:08:51.061Z] Copying: 193/1024 [MB] (193 MBps) [2024-11-15T11:08:52.438Z] Copying: 392/1024 [MB] (198 MBps) [2024-11-15T11:08:53.373Z] Copying: 590/1024 [MB] (198 MBps) [2024-11-15T11:08:54.309Z] Copying: 787/1024 [MB] (196 MBps) [2024-11-15T11:08:54.309Z] Copying: 986/1024 [MB] (198 MBps) [2024-11-15T11:08:55.686Z] Copying: 1024/1024 [MB] (average 196 MBps)
00:25:08.825 
00:25:08.825 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78504 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:25:08.825 11:08:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:08.825 [2024-11-15 11:08:55.574028] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization...
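The script lines at @83-@88 above are the heart of the dirty-shutdown scenario: the SPDK target is killed with SIGKILL instead of being torn down cleanly, 1 GiB of random data is staged (262144 blocks x 4096 B = 1024 MiB, matching the 1024/1024 [MB] progress above), and spdk_dd then replays that file into the ftl0 bdev, which is what drives the dirty startup and recovery that follows. A minimal sketch of the sequence, using only commands and flags visible in this log; pid and SPDK_DIR are stand-ins for 78504 and the repo path:

    #!/usr/bin/env bash
    # Sketch of dirty_shutdown.sh steps @83-@88 as logged above (placeholder values).
    set -euo pipefail
    pid=78504
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    kill -9 "$pid"                            # dirty shutdown: no FTL cleanup runs
    rm -f "/dev/shm/spdk_tgt_trace.pid$pid"
    # stage 1 GiB of random data: 262144 blocks * 4096 B = 1024 MiB
    "$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
        --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144
    # replay it into the ftl0 bdev at a 262144-block offset; reopening ftl0 from
    # ftl.json after the SIGKILL forces the dirty startup and recovery seen below
    "$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$SPDK_DIR/test/ftl/config/ftl.json"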
00:25:08.825 [2024-11-15 11:08:55.574153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79447 ] 00:25:09.083 [2024-11-15 11:08:55.756921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.083 [2024-11-15 11:08:55.892383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.652 [2024-11-15 11:08:56.297237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.652 [2024-11-15 11:08:56.297313] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:09.652 [2024-11-15 11:08:56.364731] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:25:09.652 [2024-11-15 11:08:56.365305] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:25:09.652 [2024-11-15 11:08:56.365572] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:25:09.912 [2024-11-15 11:08:56.693190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.693416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:09.912 [2024-11-15 11:08:56.693442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:09.912 [2024-11-15 11:08:56.693453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.693520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.693548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:09.912 [2024-11-15 11:08:56.693568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:09.912 [2024-11-15 11:08:56.693578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.693602] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:09.912 [2024-11-15 11:08:56.694595] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:09.912 [2024-11-15 11:08:56.694619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.694631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:09.912 [2024-11-15 11:08:56.694644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:25:09.912 [2024-11-15 11:08:56.694654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.697095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:09.912 [2024-11-15 11:08:56.717501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.717563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:09.912 [2024-11-15 11:08:56.717579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.439 ms 00:25:09.912 [2024-11-15 11:08:56.717606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.717673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.717687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:25:09.912 [2024-11-15 11:08:56.717699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:09.912 [2024-11-15 11:08:56.717709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.729980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.730010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:09.912 [2024-11-15 11:08:56.730024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.214 ms 00:25:09.912 [2024-11-15 11:08:56.730035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.730123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.730137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:09.912 [2024-11-15 11:08:56.730148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:09.912 [2024-11-15 11:08:56.730159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.912 [2024-11-15 11:08:56.730218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.912 [2024-11-15 11:08:56.730235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:09.912 [2024-11-15 11:08:56.730246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:09.913 [2024-11-15 11:08:56.730257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.913 [2024-11-15 11:08:56.730284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:09.913 [2024-11-15 11:08:56.736053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.913 [2024-11-15 11:08:56.736250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:09.913 [2024-11-15 11:08:56.736272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.785 ms 00:25:09.913 [2024-11-15 11:08:56.736283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.913 [2024-11-15 11:08:56.736324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.913 [2024-11-15 11:08:56.736336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:09.913 [2024-11-15 11:08:56.736348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:09.913 [2024-11-15 11:08:56.736359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.913 [2024-11-15 11:08:56.736398] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:09.913 [2024-11-15 11:08:56.736431] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:09.913 [2024-11-15 11:08:56.736470] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:09.913 [2024-11-15 11:08:56.736490] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:09.913 [2024-11-15 11:08:56.736602] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:09.913 [2024-11-15 11:08:56.736618] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:09.913 
[2024-11-15 11:08:56.736632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:09.913 [2024-11-15 11:08:56.736646] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:09.913 [2024-11-15 11:08:56.736663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:09.913 [2024-11-15 11:08:56.736675] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:09.913 [2024-11-15 11:08:56.736686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:09.913 [2024-11-15 11:08:56.736698] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:09.913 [2024-11-15 11:08:56.736709] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:09.913 [2024-11-15 11:08:56.736721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.913 [2024-11-15 11:08:56.736732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:09.913 [2024-11-15 11:08:56.736744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:25:09.913 [2024-11-15 11:08:56.736754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.913 [2024-11-15 11:08:56.736827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.913 [2024-11-15 11:08:56.736843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:09.913 [2024-11-15 11:08:56.736854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:09.913 [2024-11-15 11:08:56.736864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:09.913 [2024-11-15 11:08:56.736964] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:09.913 [2024-11-15 11:08:56.736979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:09.913 [2024-11-15 11:08:56.736991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:09.913 [2024-11-15 11:08:56.737023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:09.913 [2024-11-15 11:08:56.737055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737064] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.913 [2024-11-15 11:08:56.737076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:09.913 [2024-11-15 11:08:56.737097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:09.913 [2024-11-15 11:08:56.737106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:09.913 [2024-11-15 11:08:56.737116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:09.913 [2024-11-15 11:08:56.737126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:09.913 [2024-11-15 11:08:56.737136] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:09.913 [2024-11-15 11:08:56.737155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:09.913 [2024-11-15 11:08:56.737184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737202] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:09.913 [2024-11-15 11:08:56.737212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:09.913 [2024-11-15 11:08:56.737240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:09.913 [2024-11-15 11:08:56.737266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:09.913 [2024-11-15 11:08:56.737294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.913 [2024-11-15 11:08:56.737312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:09.913 [2024-11-15 11:08:56.737321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:09.913 [2024-11-15 11:08:56.737330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:09.913 [2024-11-15 11:08:56.737339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:09.913 [2024-11-15 11:08:56.737348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:09.913 [2024-11-15 11:08:56.737357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:09.913 [2024-11-15 11:08:56.737375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:09.913 [2024-11-15 11:08:56.737384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.913 [2024-11-15 11:08:56.737396] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:09.913 [2024-11-15 11:08:56.737407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:09.913 [2024-11-15 11:08:56.737417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:09.913 [2024-11-15 11:08:56.737432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:09.913 [2024-11-15 
11:08:56.737442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:09.913 [2024-11-15 11:08:56.737452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:09.913 [2024-11-15 11:08:56.737462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:09.913 [2024-11-15 11:08:56.737471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:09.914 [2024-11-15 11:08:56.737480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:09.914 [2024-11-15 11:08:56.737490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:09.914 [2024-11-15 11:08:56.737501] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:09.914 [2024-11-15 11:08:56.737514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:09.914 [2024-11-15 11:08:56.737567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:09.914 [2024-11-15 11:08:56.737578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:09.914 [2024-11-15 11:08:56.737589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:09.914 [2024-11-15 11:08:56.737601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:09.914 [2024-11-15 11:08:56.737612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:09.914 [2024-11-15 11:08:56.737623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:09.914 [2024-11-15 11:08:56.737634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:09.914 [2024-11-15 11:08:56.737644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:09.914 [2024-11-15 11:08:56.737655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:09.914 [2024-11-15 11:08:56.737707] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:25:09.914 [2024-11-15 11:08:56.737719] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:09.914 [2024-11-15 11:08:56.737741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:09.914 [2024-11-15 11:08:56.737751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:09.914 [2024-11-15 11:08:56.737764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:09.914 [2024-11-15 11:08:56.737775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:09.914 [2024-11-15 11:08:56.737787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:09.914 [2024-11-15 11:08:56.737798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:25:09.914 [2024-11-15 11:08:56.737809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.786658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.786693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:10.174 [2024-11-15 11:08:56.786707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.876 ms 00:25:10.174 [2024-11-15 11:08:56.786717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.786797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.786814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:10.174 [2024-11-15 11:08:56.786825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:10.174 [2024-11-15 11:08:56.786836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.851382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.851570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:10.174 [2024-11-15 11:08:56.851648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.563 ms 00:25:10.174 [2024-11-15 11:08:56.851696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.851756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.851788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:10.174 [2024-11-15 11:08:56.851819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:10.174 [2024-11-15 11:08:56.851848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.852700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.852808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:10.174 [2024-11-15 11:08:56.852883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.770 ms 00:25:10.174 [2024-11-15 11:08:56.852917] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.853082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.853351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:10.174 [2024-11-15 11:08:56.853389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:25:10.174 [2024-11-15 11:08:56.853418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.875868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.876016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:10.174 [2024-11-15 11:08:56.876121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.440 ms 00:25:10.174 [2024-11-15 11:08:56.876159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.896289] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:10.174 [2024-11-15 11:08:56.896462] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:10.174 [2024-11-15 11:08:56.896641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.896678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:10.174 [2024-11-15 11:08:56.896711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.372 ms 00:25:10.174 [2024-11-15 11:08:56.896747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.927372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.927512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:10.174 [2024-11-15 11:08:56.927621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.601 ms 00:25:10.174 [2024-11-15 11:08:56.927662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.946375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.946513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:10.174 [2024-11-15 11:08:56.946633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.615 ms 00:25:10.174 [2024-11-15 11:08:56.946671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.963889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.964033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:10.174 [2024-11-15 11:08:56.964138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.142 ms 00:25:10.174 [2024-11-15 11:08:56.964172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.174 [2024-11-15 11:08:56.964980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.174 [2024-11-15 11:08:56.965100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:10.174 [2024-11-15 11:08:56.965170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:25:10.174 [2024-11-15 11:08:56.965205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
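Each management step in this startup is logged by trace_step in mngt/ftl_mngt.c as a four-record group (Action, name, duration, status), so the slow steps of the recovery can be tabulated from a saved copy of this console output. A sketch, assuming the log has been unwrapped to one record per line; log.txt is a placeholder name:

    awk -F 'name: |duration: ' \
        '/428:trace_step/ { step = $2 }                      # remember the step name
         /430:trace_step/ { printf "%10s  %s\n", $2, step }' log.txt
    # e.g.  20.439 ms  Load super block
    #       95.504 ms  Restore P2L checkpoints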
00:25:10.433 [2024-11-15 11:08:57.060601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.060866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:10.433 [2024-11-15 11:08:57.060989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.504 ms 00:25:10.433 [2024-11-15 11:08:57.061028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.071684] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:10.433 [2024-11-15 11:08:57.075165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.075306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:10.433 [2024-11-15 11:08:57.075444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.094 ms 00:25:10.433 [2024-11-15 11:08:57.075461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.075611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.075627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:10.433 [2024-11-15 11:08:57.075639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:10.433 [2024-11-15 11:08:57.075650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.075743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.075757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:10.433 [2024-11-15 11:08:57.075768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:10.433 [2024-11-15 11:08:57.075779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.075804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.075820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:10.433 [2024-11-15 11:08:57.075831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:10.433 [2024-11-15 11:08:57.075842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.075882] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:10.433 [2024-11-15 11:08:57.075895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.075906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:10.433 [2024-11-15 11:08:57.075916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:25:10.433 [2024-11-15 11:08:57.075926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.112035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 11:08:57.112186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:10.433 [2024-11-15 11:08:57.112343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.141 ms 00:25:10.433 [2024-11-15 11:08:57.112433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.112548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.433 [2024-11-15 
11:08:57.112658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:10.433 [2024-11-15 11:08:57.112696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:10.433 [2024-11-15 11:08:57.112758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.433 [2024-11-15 11:08:57.114267] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.223 ms, result 0 00:25:11.368  [2024-11-15T11:08:59.165Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-15T11:09:00.543Z] Copying: 48/1024 [MB] (23 MBps) [2024-11-15T11:09:01.478Z] Copying: 71/1024 [MB] (23 MBps) [2024-11-15T11:09:02.439Z] Copying: 94/1024 [MB] (23 MBps) [2024-11-15T11:09:03.373Z] Copying: 118/1024 [MB] (23 MBps) [2024-11-15T11:09:04.309Z] Copying: 141/1024 [MB] (23 MBps) [2024-11-15T11:09:05.249Z] Copying: 166/1024 [MB] (24 MBps) [2024-11-15T11:09:06.183Z] Copying: 190/1024 [MB] (23 MBps) [2024-11-15T11:09:07.119Z] Copying: 212/1024 [MB] (22 MBps) [2024-11-15T11:09:08.493Z] Copying: 237/1024 [MB] (24 MBps) [2024-11-15T11:09:09.429Z] Copying: 263/1024 [MB] (26 MBps) [2024-11-15T11:09:10.362Z] Copying: 289/1024 [MB] (25 MBps) [2024-11-15T11:09:11.297Z] Copying: 315/1024 [MB] (25 MBps) [2024-11-15T11:09:12.232Z] Copying: 340/1024 [MB] (25 MBps) [2024-11-15T11:09:13.168Z] Copying: 366/1024 [MB] (26 MBps) [2024-11-15T11:09:14.103Z] Copying: 392/1024 [MB] (25 MBps) [2024-11-15T11:09:15.479Z] Copying: 417/1024 [MB] (25 MBps) [2024-11-15T11:09:16.415Z] Copying: 445/1024 [MB] (27 MBps) [2024-11-15T11:09:17.352Z] Copying: 471/1024 [MB] (26 MBps) [2024-11-15T11:09:18.289Z] Copying: 498/1024 [MB] (26 MBps) [2024-11-15T11:09:19.224Z] Copying: 524/1024 [MB] (25 MBps) [2024-11-15T11:09:20.177Z] Copying: 549/1024 [MB] (25 MBps) [2024-11-15T11:09:21.111Z] Copying: 575/1024 [MB] (25 MBps) [2024-11-15T11:09:22.487Z] Copying: 600/1024 [MB] (25 MBps) [2024-11-15T11:09:23.421Z] Copying: 625/1024 [MB] (25 MBps) [2024-11-15T11:09:24.356Z] Copying: 650/1024 [MB] (25 MBps) [2024-11-15T11:09:25.287Z] Copying: 676/1024 [MB] (25 MBps) [2024-11-15T11:09:26.221Z] Copying: 701/1024 [MB] (24 MBps) [2024-11-15T11:09:27.155Z] Copying: 725/1024 [MB] (24 MBps) [2024-11-15T11:09:28.088Z] Copying: 751/1024 [MB] (26 MBps) [2024-11-15T11:09:29.085Z] Copying: 778/1024 [MB] (26 MBps) [2024-11-15T11:09:30.462Z] Copying: 805/1024 [MB] (26 MBps) [2024-11-15T11:09:31.398Z] Copying: 832/1024 [MB] (26 MBps) [2024-11-15T11:09:32.334Z] Copying: 859/1024 [MB] (27 MBps) [2024-11-15T11:09:33.270Z] Copying: 885/1024 [MB] (26 MBps) [2024-11-15T11:09:34.207Z] Copying: 912/1024 [MB] (26 MBps) [2024-11-15T11:09:35.148Z] Copying: 937/1024 [MB] (25 MBps) [2024-11-15T11:09:36.087Z] Copying: 964/1024 [MB] (26 MBps) [2024-11-15T11:09:37.465Z] Copying: 989/1024 [MB] (24 MBps) [2024-11-15T11:09:38.403Z] Copying: 1013/1024 [MB] (23 MBps) [2024-11-15T11:09:38.403Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-15T11:09:38.403Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-15 11:09:38.262651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.262715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:51.542 [2024-11-15 11:09:38.262733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:51.542 [2024-11-15 11:09:38.262744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.542 [2024-11-15 11:09:38.263995] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:51.542 [2024-11-15 11:09:38.268654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.268696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:51.542 [2024-11-15 11:09:38.268711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:25:51.542 [2024-11-15 11:09:38.268721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.542 [2024-11-15 11:09:38.278234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.278273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:51.542 [2024-11-15 11:09:38.278287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.013 ms 00:25:51.542 [2024-11-15 11:09:38.278299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.542 [2024-11-15 11:09:38.302112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.302155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:51.542 [2024-11-15 11:09:38.302170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.833 ms 00:25:51.542 [2024-11-15 11:09:38.302181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.542 [2024-11-15 11:09:38.307261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.307303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:51.542 [2024-11-15 11:09:38.307316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.052 ms 00:25:51.542 [2024-11-15 11:09:38.307326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.542 [2024-11-15 11:09:38.344180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.344218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:51.542 [2024-11-15 11:09:38.344232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.825 ms 00:25:51.542 [2024-11-15 11:09:38.344242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.542 [2024-11-15 11:09:38.365614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.542 [2024-11-15 11:09:38.365651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:51.542 [2024-11-15 11:09:38.365667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.368 ms 00:25:51.542 [2024-11-15 11:09:38.365677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.802 [2024-11-15 11:09:38.484712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.802 [2024-11-15 11:09:38.484883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:51.802 [2024-11-15 11:09:38.484906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.186 ms 00:25:51.802 [2024-11-15 11:09:38.484924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.802 [2024-11-15 11:09:38.521630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.802 [2024-11-15 11:09:38.521669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:51.802 [2024-11-15 11:09:38.521684] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.740 ms 00:25:51.802 [2024-11-15 11:09:38.521694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.802 [2024-11-15 11:09:38.556887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.802 [2024-11-15 11:09:38.556923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:51.802 [2024-11-15 11:09:38.556936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.212 ms 00:25:51.803 [2024-11-15 11:09:38.556946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.803 [2024-11-15 11:09:38.593087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.803 [2024-11-15 11:09:38.593122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:51.803 [2024-11-15 11:09:38.593136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.162 ms 00:25:51.803 [2024-11-15 11:09:38.593145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.803 [2024-11-15 11:09:38.628051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.803 [2024-11-15 11:09:38.628085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:51.803 [2024-11-15 11:09:38.628099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.887 ms 00:25:51.803 [2024-11-15 11:09:38.628109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.803 [2024-11-15 11:09:38.628145] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:51.803 [2024-11-15 11:09:38.628162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108288 / 261120 wr_cnt: 1 state: open 00:25:51.803 [2024-11-15 11:09:38.628175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628309] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628590] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 
11:09:38.628864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.628989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:51.803 [2024-11-15 11:09:38.629000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:25:51.804 [2024-11-15 11:09:38.629124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:51.804 [2024-11-15 11:09:38.629260] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:51.804 [2024-11-15 11:09:38.629270] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1730afc2-28ce-4d76-a5dc-be7a05b3dc82 00:25:51.804 [2024-11-15 11:09:38.629282] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108288 00:25:51.804 [2024-11-15 11:09:38.629297] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 109248 00:25:51.804 [2024-11-15 11:09:38.629316] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108288 00:25:51.804 [2024-11-15 11:09:38.629327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:25:51.804 [2024-11-15 11:09:38.629336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:51.804 [2024-11-15 11:09:38.629347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:51.804 [2024-11-15 11:09:38.629357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:51.804 [2024-11-15 11:09:38.629366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:51.804 [2024-11-15 11:09:38.629375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:51.804 [2024-11-15 11:09:38.629384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.804 [2024-11-15 11:09:38.629395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:51.804 [2024-11-15 11:09:38.629405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.243 ms 00:25:51.804 [2024-11-15 11:09:38.629416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.804 [2024-11-15 11:09:38.648884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:51.804 [2024-11-15 11:09:38.648918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:51.804 [2024-11-15 11:09:38.648932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.465 ms 00:25:51.804 [2024-11-15 11:09:38.648942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:51.804 [2024-11-15 11:09:38.649478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:51.804 [2024-11-15 11:09:38.649493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:51.804 [2024-11-15 11:09:38.649505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:25:51.804 [2024-11-15 11:09:38.649514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.063 [2024-11-15 11:09:38.700682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.063 [2024-11-15 11:09:38.700720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:52.063 [2024-11-15 11:09:38.700733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.063 [2024-11-15 11:09:38.700743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.063 [2024-11-15 11:09:38.700798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.063 [2024-11-15 11:09:38.700810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:52.063 [2024-11-15 11:09:38.700821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.063 [2024-11-15 11:09:38.700832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.063 [2024-11-15 11:09:38.700898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.063 [2024-11-15 11:09:38.700912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:52.063 [2024-11-15 11:09:38.700923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.063 [2024-11-15 11:09:38.700932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.063 [2024-11-15 11:09:38.700949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.063 [2024-11-15 11:09:38.700959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:52.063 [2024-11-15 11:09:38.700969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.063 [2024-11-15 11:09:38.700979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.063 [2024-11-15 11:09:38.827117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.063 [2024-11-15 11:09:38.827179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:52.063 [2024-11-15 11:09:38.827195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.063 [2024-11-15 11:09:38.827205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.322 [2024-11-15 11:09:38.928298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.322 [2024-11-15 11:09:38.928505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:52.322 [2024-11-15 11:09:38.928541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.322 [2024-11-15 11:09:38.928554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.322 [2024-11-15 
11:09:38.928664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.322 [2024-11-15 11:09:38.928676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:52.322 [2024-11-15 11:09:38.928687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.322 [2024-11-15 11:09:38.928697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.322 [2024-11-15 11:09:38.928743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.322 [2024-11-15 11:09:38.928755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:52.322 [2024-11-15 11:09:38.928766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.322 [2024-11-15 11:09:38.928775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.322 [2024-11-15 11:09:38.928889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.322 [2024-11-15 11:09:38.928907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:52.322 [2024-11-15 11:09:38.928918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.322 [2024-11-15 11:09:38.928928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.322 [2024-11-15 11:09:38.928965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.323 [2024-11-15 11:09:38.928977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:52.323 [2024-11-15 11:09:38.928987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.323 [2024-11-15 11:09:38.928997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.323 [2024-11-15 11:09:38.929036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.323 [2024-11-15 11:09:38.929051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:52.323 [2024-11-15 11:09:38.929061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.323 [2024-11-15 11:09:38.929071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.323 [2024-11-15 11:09:38.929112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:52.323 [2024-11-15 11:09:38.929124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:52.323 [2024-11-15 11:09:38.929135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:52.323 [2024-11-15 11:09:38.929145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:52.323 [2024-11-15 11:09:38.929266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 668.289 ms, result 0 00:25:53.700 00:25:53.700 00:25:53.700 11:09:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:55.611 11:09:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:55.611 [2024-11-15 11:09:42.136014] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
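The "Dump statistics" block from the shutdown above is internally consistent: WAF (write amplification factor) is the ratio of total device writes to user-issued writes, so for this clean shutdown

    \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{109248}{108288} \;\approx\; 1.0089,

matching the logged value. The spdk_dd invocation that follows reads --count=262144 blocks from ftl0, and the later progress ticker totals 1024 MB; taken together these imply a 4 KiB FTL block here, since

    \frac{1024 \times 1024\,\mathrm{KiB}}{262144} \;=\; 4\,\mathrm{KiB}.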
00:25:55.611 [2024-11-15 11:09:42.136520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79916 ] 00:25:55.611 [2024-11-15 11:09:42.317728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.611 [2024-11-15 11:09:42.437946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.179 [2024-11-15 11:09:42.813188] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.179 [2024-11-15 11:09:42.813454] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:56.179 [2024-11-15 11:09:42.976945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:42.977156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:56.179 [2024-11-15 11:09:42.977275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:56.179 [2024-11-15 11:09:42.977317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:42.977419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:42.977543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:56.179 [2024-11-15 11:09:42.977602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:56.179 [2024-11-15 11:09:42.977650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:42.977820] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:56.179 [2024-11-15 11:09:42.978924] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:56.179 [2024-11-15 11:09:42.979092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:42.979193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:56.179 [2024-11-15 11:09:42.979278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.280 ms 00:25:56.179 [2024-11-15 11:09:42.979316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:42.981027] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:56.179 [2024-11-15 11:09:43.001973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.002012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:56.179 [2024-11-15 11:09:43.002027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.980 ms 00:25:56.179 [2024-11-15 11:09:43.002039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.002123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.002137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:56.179 [2024-11-15 11:09:43.002149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:56.179 [2024-11-15 11:09:43.002160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.009465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:56.179 [2024-11-15 11:09:43.009623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:56.179 [2024-11-15 11:09:43.009646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.228 ms 00:25:56.179 [2024-11-15 11:09:43.009658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.009752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.009766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:56.179 [2024-11-15 11:09:43.009777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:56.179 [2024-11-15 11:09:43.009788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.009832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.009844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:56.179 [2024-11-15 11:09:43.009855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:56.179 [2024-11-15 11:09:43.009865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.009891] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:56.179 [2024-11-15 11:09:43.015188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.015221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:56.179 [2024-11-15 11:09:43.015235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.311 ms 00:25:56.179 [2024-11-15 11:09:43.015249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.015281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.015304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:56.179 [2024-11-15 11:09:43.015316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:56.179 [2024-11-15 11:09:43.015327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.179 [2024-11-15 11:09:43.015382] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:56.179 [2024-11-15 11:09:43.015407] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:56.179 [2024-11-15 11:09:43.015454] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:56.179 [2024-11-15 11:09:43.015477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:56.179 [2024-11-15 11:09:43.015598] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:56.179 [2024-11-15 11:09:43.015614] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:56.179 [2024-11-15 11:09:43.015628] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:56.179 [2024-11-15 11:09:43.015643] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:56.179 [2024-11-15 11:09:43.015655] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:56.179 [2024-11-15 11:09:43.015667] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:56.179 [2024-11-15 11:09:43.015686] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:56.179 [2024-11-15 11:09:43.015697] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:56.179 [2024-11-15 11:09:43.015707] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:56.179 [2024-11-15 11:09:43.015723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.179 [2024-11-15 11:09:43.015733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:56.180 [2024-11-15 11:09:43.015744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:25:56.180 [2024-11-15 11:09:43.015755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.180 [2024-11-15 11:09:43.015845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.180 [2024-11-15 11:09:43.015858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:56.180 [2024-11-15 11:09:43.015869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:56.180 [2024-11-15 11:09:43.015880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.180 [2024-11-15 11:09:43.015986] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:56.180 [2024-11-15 11:09:43.016007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:56.180 [2024-11-15 11:09:43.016019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:56.180 [2024-11-15 11:09:43.016059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:56.180 [2024-11-15 11:09:43.016090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.180 [2024-11-15 11:09:43.016110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:56.180 [2024-11-15 11:09:43.016120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:56.180 [2024-11-15 11:09:43.016130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:56.180 [2024-11-15 11:09:43.016140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:56.180 [2024-11-15 11:09:43.016150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:56.180 [2024-11-15 11:09:43.016169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:56.180 [2024-11-15 11:09:43.016200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016210] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:56.180 [2024-11-15 11:09:43.016230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:56.180 [2024-11-15 11:09:43.016258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016277] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:56.180 [2024-11-15 11:09:43.016286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:56.180 [2024-11-15 11:09:43.016325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:56.180 [2024-11-15 11:09:43.016354] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.180 [2024-11-15 11:09:43.016373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:56.180 [2024-11-15 11:09:43.016382] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:56.180 [2024-11-15 11:09:43.016391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:56.180 [2024-11-15 11:09:43.016401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:56.180 [2024-11-15 11:09:43.016411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:56.180 [2024-11-15 11:09:43.016421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:56.180 [2024-11-15 11:09:43.016448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:56.180 [2024-11-15 11:09:43.016457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016467] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:56.180 [2024-11-15 11:09:43.016478] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:56.180 [2024-11-15 11:09:43.016489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:56.180 [2024-11-15 11:09:43.016511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:56.180 [2024-11-15 11:09:43.016522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:56.180 [2024-11-15 11:09:43.016545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:56.180 
[2024-11-15 11:09:43.016555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:56.180 [2024-11-15 11:09:43.016575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:56.180 [2024-11-15 11:09:43.016585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:56.180 [2024-11-15 11:09:43.016597] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:56.180 [2024-11-15 11:09:43.016610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:56.180 [2024-11-15 11:09:43.016633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:56.180 [2024-11-15 11:09:43.016644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:56.180 [2024-11-15 11:09:43.016655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:56.180 [2024-11-15 11:09:43.016666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:56.180 [2024-11-15 11:09:43.016676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:56.180 [2024-11-15 11:09:43.016695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:56.180 [2024-11-15 11:09:43.016706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:56.180 [2024-11-15 11:09:43.016716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:56.180 [2024-11-15 11:09:43.016727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016770] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:56.180 [2024-11-15 11:09:43.016780] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:56.180 [2024-11-15 11:09:43.016796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:56.180 [2024-11-15 11:09:43.016827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:56.180 [2024-11-15 11:09:43.016838] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:56.180 [2024-11-15 11:09:43.016848] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:56.180 [2024-11-15 11:09:43.016860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.180 [2024-11-15 11:09:43.016872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:56.180 [2024-11-15 11:09:43.016883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:25:56.180 [2024-11-15 11:09:43.016893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.062384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.062430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:56.440 [2024-11-15 11:09:43.062446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.493 ms 00:25:56.440 [2024-11-15 11:09:43.062458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.062588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.062612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:56.440 [2024-11-15 11:09:43.062624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:25:56.440 [2024-11-15 11:09:43.062635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.130153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.130197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:56.440 [2024-11-15 11:09:43.130212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.553 ms 00:25:56.440 [2024-11-15 11:09:43.130223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.130284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.130298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:56.440 [2024-11-15 11:09:43.130310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:56.440 [2024-11-15 11:09:43.130325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.130876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.130894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:56.440 [2024-11-15 11:09:43.130906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:25:56.440 [2024-11-15 11:09:43.130916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.131061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.131080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:56.440 [2024-11-15 11:09:43.131092] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:25:56.440 [2024-11-15 11:09:43.131108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.152914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.152958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:56.440 [2024-11-15 11:09:43.152977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.818 ms 00:25:56.440 [2024-11-15 11:09:43.153005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.173976] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:56.440 [2024-11-15 11:09:43.174023] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:56.440 [2024-11-15 11:09:43.174039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.174051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:56.440 [2024-11-15 11:09:43.174064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.943 ms 00:25:56.440 [2024-11-15 11:09:43.174073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.204567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.204618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:56.440 [2024-11-15 11:09:43.204633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.493 ms 00:25:56.440 [2024-11-15 11:09:43.204644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.223906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.224072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:56.440 [2024-11-15 11:09:43.224093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.249 ms 00:25:56.440 [2024-11-15 11:09:43.224103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.242885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.243052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:56.440 [2024-11-15 11:09:43.243073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.764 ms 00:25:56.440 [2024-11-15 11:09:43.243083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.440 [2024-11-15 11:09:43.243913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.440 [2024-11-15 11:09:43.243948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:56.440 [2024-11-15 11:09:43.243961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:25:56.440 [2024-11-15 11:09:43.243975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.332382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.332443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:56.700 [2024-11-15 11:09:43.332466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 88.528 ms 00:25:56.700 [2024-11-15 11:09:43.332486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.343714] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:56.700 [2024-11-15 11:09:43.346665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.346705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:56.700 [2024-11-15 11:09:43.346719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.134 ms 00:25:56.700 [2024-11-15 11:09:43.346730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.346830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.346845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:56.700 [2024-11-15 11:09:43.346856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:56.700 [2024-11-15 11:09:43.346870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.348594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.348726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:56.700 [2024-11-15 11:09:43.348839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.644 ms 00:25:56.700 [2024-11-15 11:09:43.348890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.348951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.349039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:56.700 [2024-11-15 11:09:43.349077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:56.700 [2024-11-15 11:09:43.349107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.349225] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:56.700 [2024-11-15 11:09:43.349284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.349316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:56.700 [2024-11-15 11:09:43.349400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:56.700 [2024-11-15 11:09:43.349434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.387681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.387832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:56.700 [2024-11-15 11:09:43.387977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.255 ms 00:25:56.700 [2024-11-15 11:09:43.388000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:56.700 [2024-11-15 11:09:43.388074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:56.700 [2024-11-15 11:09:43.388087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:56.700 [2024-11-15 11:09:43.388098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:56.700 [2024-11-15 11:09:43.388109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
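A quick cross-check of the layout dump above: with 20971520 L2P entries at an address size of 4 bytes, the L2P table needs

    20971520 \times 4\,\mathrm{B} \;=\; 83886080\,\mathrm{B} \;=\; 80\,\mathrm{MiB},

which is exactly the 80.00 MiB reported for the l2p region in the NV cache layout.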
00:25:56.700 [2024-11-15 11:09:43.389212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 412.445 ms, result 0 00:25:58.087  [2024-11-15T11:09:45.972Z] Copying: 1228/1048576 [kB] (1228 kBps) [2024-11-15T11:09:46.928Z] Copying: 9388/1048576 [kB] (8160 kBps) [2024-11-15T11:09:47.865Z] Copying: 41/1024 [MB] (31 MBps) [2024-11-15T11:09:48.801Z] Copying: 76/1024 [MB] (35 MBps) [2024-11-15T11:09:49.737Z] Copying: 111/1024 [MB] (35 MBps) [2024-11-15T11:09:50.674Z] Copying: 146/1024 [MB] (34 MBps) [2024-11-15T11:09:51.611Z] Copying: 181/1024 [MB] (35 MBps) [2024-11-15T11:09:52.990Z] Copying: 216/1024 [MB] (34 MBps) [2024-11-15T11:09:53.923Z] Copying: 251/1024 [MB] (34 MBps) [2024-11-15T11:09:54.861Z] Copying: 286/1024 [MB] (35 MBps) [2024-11-15T11:09:55.798Z] Copying: 320/1024 [MB] (34 MBps) [2024-11-15T11:09:56.734Z] Copying: 354/1024 [MB] (33 MBps) [2024-11-15T11:09:57.670Z] Copying: 389/1024 [MB] (34 MBps) [2024-11-15T11:09:58.606Z] Copying: 424/1024 [MB] (35 MBps) [2024-11-15T11:09:59.984Z] Copying: 459/1024 [MB] (35 MBps) [2024-11-15T11:10:00.921Z] Copying: 494/1024 [MB] (34 MBps) [2024-11-15T11:10:01.865Z] Copying: 529/1024 [MB] (35 MBps) [2024-11-15T11:10:02.818Z] Copying: 563/1024 [MB] (33 MBps) [2024-11-15T11:10:03.753Z] Copying: 597/1024 [MB] (34 MBps) [2024-11-15T11:10:04.690Z] Copying: 631/1024 [MB] (33 MBps) [2024-11-15T11:10:05.630Z] Copying: 664/1024 [MB] (33 MBps) [2024-11-15T11:10:07.008Z] Copying: 698/1024 [MB] (33 MBps) [2024-11-15T11:10:07.945Z] Copying: 729/1024 [MB] (31 MBps) [2024-11-15T11:10:08.887Z] Copying: 758/1024 [MB] (28 MBps) [2024-11-15T11:10:09.823Z] Copying: 788/1024 [MB] (30 MBps) [2024-11-15T11:10:10.760Z] Copying: 822/1024 [MB] (33 MBps) [2024-11-15T11:10:11.701Z] Copying: 854/1024 [MB] (32 MBps) [2024-11-15T11:10:12.636Z] Copying: 890/1024 [MB] (35 MBps) [2024-11-15T11:10:13.573Z] Copying: 926/1024 [MB] (35 MBps) [2024-11-15T11:10:14.948Z] Copying: 960/1024 [MB] (34 MBps) [2024-11-15T11:10:15.516Z] Copying: 995/1024 [MB] (34 MBps) [2024-11-15T11:10:15.775Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-15 11:10:15.627483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.914 [2024-11-15 11:10:15.627561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:28.914 [2024-11-15 11:10:15.627585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:28.914 [2024-11-15 11:10:15.627596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.914 [2024-11-15 11:10:15.627620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:28.914 [2024-11-15 11:10:15.632441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.914 [2024-11-15 11:10:15.632560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:28.914 [2024-11-15 11:10:15.632575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.808 ms 00:26:28.914 [2024-11-15 11:10:15.632586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.632799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.632812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:28.915 [2024-11-15 11:10:15.632827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:26:28.915 [2024-11-15 11:10:15.632837] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.644491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.644529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:28.915 [2024-11-15 11:10:15.644544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.653 ms 00:26:28.915 [2024-11-15 11:10:15.644565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.649753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.649786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:28.915 [2024-11-15 11:10:15.649798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:26:28.915 [2024-11-15 11:10:15.649816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.686637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.686673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:28.915 [2024-11-15 11:10:15.686687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.814 ms 00:26:28.915 [2024-11-15 11:10:15.686698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.707240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.707275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:28.915 [2024-11-15 11:10:15.707289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.534 ms 00:26:28.915 [2024-11-15 11:10:15.707300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.709247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.709281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:28.915 [2024-11-15 11:10:15.709293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:26:28.915 [2024-11-15 11:10:15.709304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:28.915 [2024-11-15 11:10:15.745667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:28.915 [2024-11-15 11:10:15.745701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:28.915 [2024-11-15 11:10:15.745715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.397 ms 00:26:28.915 [2024-11-15 11:10:15.745726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.175 [2024-11-15 11:10:15.782343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.175 [2024-11-15 11:10:15.782394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:29.175 [2024-11-15 11:10:15.782420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.637 ms 00:26:29.175 [2024-11-15 11:10:15.782430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.175 [2024-11-15 11:10:15.819239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.175 [2024-11-15 11:10:15.819273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:29.175 [2024-11-15 11:10:15.819286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 36.830 ms 00:26:29.175 [2024-11-15 11:10:15.819296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.175 [2024-11-15 11:10:15.854942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.175 [2024-11-15 11:10:15.854978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:29.175 [2024-11-15 11:10:15.854991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.625 ms 00:26:29.175 [2024-11-15 11:10:15.855001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.176 [2024-11-15 11:10:15.855038] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:29.176 [2024-11-15 11:10:15.855055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:29.176 [2024-11-15 11:10:15.855069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:29.176 [2024-11-15 11:10:15.855080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:26:29.176 [2024-11-15 11:10:15.855282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:29.176 [2024-11-15 11:10:15.855996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856093] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:29.177 [2024-11-15 11:10:15.856164] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:29.177 [2024-11-15 11:10:15.856174] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1730afc2-28ce-4d76-a5dc-be7a05b3dc82 00:26:29.177 [2024-11-15 11:10:15.856185] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:29.177 [2024-11-15 11:10:15.856195] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156352 00:26:29.177 [2024-11-15 11:10:15.856205] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154368 00:26:29.177 [2024-11-15 11:10:15.856219] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0129 00:26:29.177 [2024-11-15 11:10:15.856228] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:29.177 [2024-11-15 11:10:15.856238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:29.177 [2024-11-15 11:10:15.856247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:29.177 [2024-11-15 11:10:15.856267] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:29.177 [2024-11-15 11:10:15.856276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:29.177 [2024-11-15 11:10:15.856286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.177 [2024-11-15 11:10:15.856296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:29.177 [2024-11-15 11:10:15.856307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.251 ms 00:26:29.177 [2024-11-15 11:10:15.856317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.177 [2024-11-15 11:10:15.875387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.177 [2024-11-15 11:10:15.875425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:29.177 [2024-11-15 11:10:15.875437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.065 ms 00:26:29.177 [2024-11-15 11:10:15.875447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.177 [2024-11-15 11:10:15.875992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:29.177 [2024-11-15 11:10:15.876009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:29.177 [2024-11-15 11:10:15.876019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:26:29.177 [2024-11-15 11:10:15.876030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.177 [2024-11-15 11:10:15.927204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:26:29.177 [2024-11-15 11:10:15.927239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:29.177 [2024-11-15 11:10:15.927268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.177 [2024-11-15 11:10:15.927279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.177 [2024-11-15 11:10:15.927332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.177 [2024-11-15 11:10:15.927343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:29.177 [2024-11-15 11:10:15.927354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.177 [2024-11-15 11:10:15.927364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.177 [2024-11-15 11:10:15.927427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.177 [2024-11-15 11:10:15.927446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:29.177 [2024-11-15 11:10:15.927456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.177 [2024-11-15 11:10:15.927466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.177 [2024-11-15 11:10:15.927483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.177 [2024-11-15 11:10:15.927493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:29.177 [2024-11-15 11:10:15.927504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.177 [2024-11-15 11:10:15.927514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.436 [2024-11-15 11:10:16.051779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.436 [2024-11-15 11:10:16.051860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:29.436 [2024-11-15 11:10:16.051874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.436 [2024-11-15 11:10:16.051884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.436 [2024-11-15 11:10:16.153352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.436 [2024-11-15 11:10:16.153423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:29.436 [2024-11-15 11:10:16.153438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.436 [2024-11-15 11:10:16.153448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.436 [2024-11-15 11:10:16.153556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.436 [2024-11-15 11:10:16.153578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:29.437 [2024-11-15 11:10:16.153593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.437 [2024-11-15 11:10:16.153603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.437 [2024-11-15 11:10:16.153655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.437 [2024-11-15 11:10:16.153667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:29.437 [2024-11-15 11:10:16.153677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.437 [2024-11-15 11:10:16.153687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.437 [2024-11-15 
11:10:16.153790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.437 [2024-11-15 11:10:16.153803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:29.437 [2024-11-15 11:10:16.153813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.437 [2024-11-15 11:10:16.153827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.437 [2024-11-15 11:10:16.153863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.437 [2024-11-15 11:10:16.153875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:29.437 [2024-11-15 11:10:16.153885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.437 [2024-11-15 11:10:16.153895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.437 [2024-11-15 11:10:16.153933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.437 [2024-11-15 11:10:16.153944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:29.437 [2024-11-15 11:10:16.153955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.437 [2024-11-15 11:10:16.153969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.437 [2024-11-15 11:10:16.154011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:29.437 [2024-11-15 11:10:16.154023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:29.437 [2024-11-15 11:10:16.154033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:29.437 [2024-11-15 11:10:16.154043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:29.437 [2024-11-15 11:10:16.154163] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 527.503 ms, result 0 00:26:30.369 00:26:30.369 00:26:30.369 11:10:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:32.271 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:32.271 11:10:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:32.271 [2024-11-15 11:10:19.062280] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
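A note on the WAF figure in the shutdown statistics dumped above: the FTL appears to report write amplification as total device writes divided by user writes — an assumption inferred from the printed counters, not confirmed by the log itself. A minimal shell sketch recomputing it from those counters:

  # Hedged sketch: recompute WAF from the ftl_dev_dump_stats counters above,
  # assuming WAF = total writes / user writes.
  total=156352; user=154368
  awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t/u }'
  # -> WAF: 1.0129, matching the dump. The second stats dump later in this log
  #    reports "user writes: 0", which is why it prints "WAF: inf".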
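With the first half of the test data verified by the md5sum check above, the test reads the second 262144-block slice back out of ftl0 into testfile2 for comparison. A sketch of that read-back step, using the exact flags and repository paths shown in the log:

  # Read blocks 262144..524287 from the FTL bdev into testfile2 (paths as in the log).
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK"/build/bin/spdk_dd --ib=ftl0 --of="$SPDK"/test/ftl/testfile2 \
      --count=262144 --skip=262144 --json="$SPDK"/test/ftl/config/ftl.json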
00:26:32.271 [2024-11-15 11:10:19.062402] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80295 ] 00:26:32.529 [2024-11-15 11:10:19.242609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.529 [2024-11-15 11:10:19.360029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.099 [2024-11-15 11:10:19.718483] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:33.099 [2024-11-15 11:10:19.718566] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:33.099 [2024-11-15 11:10:19.880056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.880132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:33.099 [2024-11-15 11:10:19.880154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:33.099 [2024-11-15 11:10:19.880167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.880219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.880233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:33.099 [2024-11-15 11:10:19.880248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:33.099 [2024-11-15 11:10:19.880259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.880282] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:33.099 [2024-11-15 11:10:19.881392] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:33.099 [2024-11-15 11:10:19.881427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.881440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:33.099 [2024-11-15 11:10:19.881452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:26:33.099 [2024-11-15 11:10:19.881463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.883045] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:33.099 [2024-11-15 11:10:19.901854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.901911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:33.099 [2024-11-15 11:10:19.901932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.839 ms 00:26:33.099 [2024-11-15 11:10:19.901949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.902038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.902058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:33.099 [2024-11-15 11:10:19.902075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:33.099 [2024-11-15 11:10:19.902091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.910553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:33.099 [2024-11-15 11:10:19.910618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:33.099 [2024-11-15 11:10:19.910638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.371 ms 00:26:33.099 [2024-11-15 11:10:19.910654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.910769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.910788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:33.099 [2024-11-15 11:10:19.910804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:26:33.099 [2024-11-15 11:10:19.910818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.910879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.910894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:33.099 [2024-11-15 11:10:19.910910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:33.099 [2024-11-15 11:10:19.910924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.910958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:33.099 [2024-11-15 11:10:19.915959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.916009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:33.099 [2024-11-15 11:10:19.916030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.017 ms 00:26:33.099 [2024-11-15 11:10:19.916051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.916093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.916111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:33.099 [2024-11-15 11:10:19.916128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:33.099 [2024-11-15 11:10:19.916143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.916217] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:33.099 [2024-11-15 11:10:19.916251] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:33.099 [2024-11-15 11:10:19.916296] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:33.099 [2024-11-15 11:10:19.916325] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:33.099 [2024-11-15 11:10:19.916427] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:33.099 [2024-11-15 11:10:19.916447] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:33.099 [2024-11-15 11:10:19.916466] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:33.099 [2024-11-15 11:10:19.916485] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:33.099 [2024-11-15 11:10:19.916502] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:33.099 [2024-11-15 11:10:19.916517] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:33.099 [2024-11-15 11:10:19.916552] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:33.099 [2024-11-15 11:10:19.916566] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:33.099 [2024-11-15 11:10:19.916579] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:33.099 [2024-11-15 11:10:19.916601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.916616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:33.099 [2024-11-15 11:10:19.916634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:26:33.099 [2024-11-15 11:10:19.916648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.916741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.099 [2024-11-15 11:10:19.916757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:33.099 [2024-11-15 11:10:19.916773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:26:33.099 [2024-11-15 11:10:19.916787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.099 [2024-11-15 11:10:19.916898] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:33.100 [2024-11-15 11:10:19.916934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:33.100 [2024-11-15 11:10:19.916950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.100 [2024-11-15 11:10:19.916965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.916980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:33.100 [2024-11-15 11:10:19.916994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:33.100 [2024-11-15 11:10:19.917034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.100 [2024-11-15 11:10:19.917061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:33.100 [2024-11-15 11:10:19.917074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:33.100 [2024-11-15 11:10:19.917087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:33.100 [2024-11-15 11:10:19.917099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:33.100 [2024-11-15 11:10:19.917113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:33.100 [2024-11-15 11:10:19.917138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:33.100 [2024-11-15 11:10:19.917165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917178] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:33.100 [2024-11-15 11:10:19.917204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:33.100 [2024-11-15 11:10:19.917245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:33.100 [2024-11-15 11:10:19.917282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:33.100 [2024-11-15 11:10:19.917322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:33.100 [2024-11-15 11:10:19.917362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.100 [2024-11-15 11:10:19.917387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:33.100 [2024-11-15 11:10:19.917400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:33.100 [2024-11-15 11:10:19.917413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:33.100 [2024-11-15 11:10:19.917426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:33.100 [2024-11-15 11:10:19.917440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:33.100 [2024-11-15 11:10:19.917451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:33.100 [2024-11-15 11:10:19.917479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:33.100 [2024-11-15 11:10:19.917493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917506] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:33.100 [2024-11-15 11:10:19.917545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:33.100 [2024-11-15 11:10:19.917562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:33.100 [2024-11-15 11:10:19.917604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:33.100 [2024-11-15 11:10:19.917618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:33.100 [2024-11-15 11:10:19.917634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:33.100 
[2024-11-15 11:10:19.917647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:33.100 [2024-11-15 11:10:19.917659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:33.100 [2024-11-15 11:10:19.917673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:33.100 [2024-11-15 11:10:19.917688] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:33.100 [2024-11-15 11:10:19.917706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:33.100 [2024-11-15 11:10:19.917738] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:33.100 [2024-11-15 11:10:19.917753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:33.100 [2024-11-15 11:10:19.917768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:33.100 [2024-11-15 11:10:19.917782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:33.100 [2024-11-15 11:10:19.917797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:33.100 [2024-11-15 11:10:19.917812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:33.100 [2024-11-15 11:10:19.917827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:33.100 [2024-11-15 11:10:19.917842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:33.100 [2024-11-15 11:10:19.917856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:33.100 [2024-11-15 11:10:19.917932] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:33.100 [2024-11-15 11:10:19.917953] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:33.100 [2024-11-15 11:10:19.917984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:33.100 [2024-11-15 11:10:19.918000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:33.100 [2024-11-15 11:10:19.918015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:33.100 [2024-11-15 11:10:19.918032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.100 [2024-11-15 11:10:19.918048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:33.100 [2024-11-15 11:10:19.918066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.193 ms 00:26:33.100 [2024-11-15 11:10:19.918083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.100 [2024-11-15 11:10:19.957174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.100 [2024-11-15 11:10:19.957224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:33.100 [2024-11-15 11:10:19.957239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.072 ms 00:26:33.100 [2024-11-15 11:10:19.957251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.100 [2024-11-15 11:10:19.957351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.100 [2024-11-15 11:10:19.957363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:33.100 [2024-11-15 11:10:19.957374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:33.100 [2024-11-15 11:10:19.957384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.014669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.014720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:33.360 [2024-11-15 11:10:20.014735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.305 ms 00:26:33.360 [2024-11-15 11:10:20.014745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.014804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.014816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:33.360 [2024-11-15 11:10:20.014828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:33.360 [2024-11-15 11:10:20.014842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.015337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.015359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:33.360 [2024-11-15 11:10:20.015371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:26:33.360 [2024-11-15 11:10:20.015381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.015505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.015534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:33.360 [2024-11-15 11:10:20.015546] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:26:33.360 [2024-11-15 11:10:20.015563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.035334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.035378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:33.360 [2024-11-15 11:10:20.035397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.780 ms 00:26:33.360 [2024-11-15 11:10:20.035408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.055300] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:33.360 [2024-11-15 11:10:20.055342] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:33.360 [2024-11-15 11:10:20.055358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.055370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:33.360 [2024-11-15 11:10:20.055382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.845 ms 00:26:33.360 [2024-11-15 11:10:20.055393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.085194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.085243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:33.360 [2024-11-15 11:10:20.085257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.804 ms 00:26:33.360 [2024-11-15 11:10:20.085267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.103610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.103645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:33.360 [2024-11-15 11:10:20.103658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.327 ms 00:26:33.360 [2024-11-15 11:10:20.103668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.121961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.121997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:33.360 [2024-11-15 11:10:20.122010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.282 ms 00:26:33.360 [2024-11-15 11:10:20.122020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.122831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.122865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:33.360 [2024-11-15 11:10:20.122877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:26:33.360 [2024-11-15 11:10:20.122891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.360 [2024-11-15 11:10:20.210043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.360 [2024-11-15 11:10:20.210104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:33.360 [2024-11-15 11:10:20.210127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.270 ms 00:26:33.360 [2024-11-15 11:10:20.210138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.222256] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:33.619 [2024-11-15 11:10:20.225578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.225614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:33.619 [2024-11-15 11:10:20.225645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.398 ms 00:26:33.619 [2024-11-15 11:10:20.225657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.225768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.225783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:33.619 [2024-11-15 11:10:20.225795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:33.619 [2024-11-15 11:10:20.225809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.226752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.226776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:33.619 [2024-11-15 11:10:20.226788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:26:33.619 [2024-11-15 11:10:20.226798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.226830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.226841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:33.619 [2024-11-15 11:10:20.226852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:33.619 [2024-11-15 11:10:20.226861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.226897] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:33.619 [2024-11-15 11:10:20.226913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.226923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:33.619 [2024-11-15 11:10:20.226933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:33.619 [2024-11-15 11:10:20.226943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.263503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.263552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:33.619 [2024-11-15 11:10:20.263568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.598 ms 00:26:33.619 [2024-11-15 11:10:20.263585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:33.619 [2024-11-15 11:10:20.263671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:33.619 [2024-11-15 11:10:20.263684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:33.619 [2024-11-15 11:10:20.263696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:26:33.619 [2024-11-15 11:10:20.263706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
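The layout dump earlier in this startup trace is internally consistent: 20971520 L2P entries at the reported 4-byte address size come to exactly the 80.00 MiB shown for the l2p region. A quick shell check:

  # 20971520 entries * 4 bytes per entry, converted to MiB.
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80, matching "Region l2p ... blocks: 80.00 MiB"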
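Each management step in the trace above logs a name/duration/status triple through trace_step. A hedged one-liner for tabulating the slowest steps from a capture of this log — build.log is a hypothetical filename, and it assumes one record per line as in the raw console output (here the lines are wrapped):

  # List FTL management steps sorted by duration, longest first (bash).
  grep -E 'trace_step.*(name|duration):' build.log \
    | sed -E 's/.*(name|duration): //' \
    | paste - - \
    | sort -t$'\t' -k2 -rn \
    | head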
00:26:33.619 [2024-11-15 11:10:20.264868] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.976 ms, result 0 00:26:35.007  [2024-11-15T11:10:22.819Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-15T11:10:23.753Z] Copying: 59/1024 [MB] (28 MBps) [2024-11-15T11:10:24.687Z] Copying: 89/1024 [MB] (30 MBps) [2024-11-15T11:10:25.621Z] Copying: 119/1024 [MB] (29 MBps) [2024-11-15T11:10:26.557Z] Copying: 147/1024 [MB] (27 MBps) [2024-11-15T11:10:27.493Z] Copying: 173/1024 [MB] (26 MBps) [2024-11-15T11:10:28.870Z] Copying: 201/1024 [MB] (27 MBps) [2024-11-15T11:10:29.806Z] Copying: 230/1024 [MB] (28 MBps) [2024-11-15T11:10:30.741Z] Copying: 259/1024 [MB] (28 MBps) [2024-11-15T11:10:31.685Z] Copying: 287/1024 [MB] (28 MBps) [2024-11-15T11:10:32.621Z] Copying: 316/1024 [MB] (28 MBps) [2024-11-15T11:10:33.560Z] Copying: 344/1024 [MB] (28 MBps) [2024-11-15T11:10:34.496Z] Copying: 373/1024 [MB] (29 MBps) [2024-11-15T11:10:35.870Z] Copying: 402/1024 [MB] (28 MBps) [2024-11-15T11:10:36.807Z] Copying: 430/1024 [MB] (28 MBps) [2024-11-15T11:10:37.743Z] Copying: 458/1024 [MB] (28 MBps) [2024-11-15T11:10:38.678Z] Copying: 486/1024 [MB] (27 MBps) [2024-11-15T11:10:39.628Z] Copying: 514/1024 [MB] (27 MBps) [2024-11-15T11:10:40.587Z] Copying: 542/1024 [MB] (27 MBps) [2024-11-15T11:10:41.524Z] Copying: 570/1024 [MB] (28 MBps) [2024-11-15T11:10:42.459Z] Copying: 600/1024 [MB] (30 MBps) [2024-11-15T11:10:43.836Z] Copying: 631/1024 [MB] (30 MBps) [2024-11-15T11:10:44.772Z] Copying: 660/1024 [MB] (28 MBps) [2024-11-15T11:10:45.708Z] Copying: 689/1024 [MB] (29 MBps) [2024-11-15T11:10:46.650Z] Copying: 719/1024 [MB] (29 MBps) [2024-11-15T11:10:47.591Z] Copying: 748/1024 [MB] (28 MBps) [2024-11-15T11:10:48.529Z] Copying: 776/1024 [MB] (28 MBps) [2024-11-15T11:10:49.467Z] Copying: 805/1024 [MB] (29 MBps) [2024-11-15T11:10:50.847Z] Copying: 835/1024 [MB] (29 MBps) [2024-11-15T11:10:51.784Z] Copying: 865/1024 [MB] (30 MBps) [2024-11-15T11:10:52.719Z] Copying: 897/1024 [MB] (31 MBps) [2024-11-15T11:10:53.653Z] Copying: 926/1024 [MB] (29 MBps) [2024-11-15T11:10:54.589Z] Copying: 955/1024 [MB] (29 MBps) [2024-11-15T11:10:55.528Z] Copying: 987/1024 [MB] (32 MBps) [2024-11-15T11:10:55.788Z] Copying: 1018/1024 [MB] (30 MBps) [2024-11-15T11:10:55.788Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-15 11:10:55.714447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.927 [2024-11-15 11:10:55.714516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.927 [2024-11-15 11:10:55.714553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:08.927 [2024-11-15 11:10:55.714567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.927 [2024-11-15 11:10:55.714597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:08.927 [2024-11-15 11:10:55.719715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.927 [2024-11-15 11:10:55.719760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.927 [2024-11-15 11:10:55.719782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.105 ms 00:27:08.927 [2024-11-15 11:10:55.719795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.927 [2024-11-15 11:10:55.720037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.927 [2024-11-15 11:10:55.720052] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.927 [2024-11-15 11:10:55.720065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:27:08.927 [2024-11-15 11:10:55.720077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.927 [2024-11-15 11:10:55.723688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.927 [2024-11-15 11:10:55.723724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.927 [2024-11-15 11:10:55.723739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.597 ms 00:27:08.927 [2024-11-15 11:10:55.723751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.927 [2024-11-15 11:10:55.728860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.927 [2024-11-15 11:10:55.728893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.927 [2024-11-15 11:10:55.728905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.084 ms 00:27:08.927 [2024-11-15 11:10:55.728916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.927 [2024-11-15 11:10:55.767075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.927 [2024-11-15 11:10:55.767116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.927 [2024-11-15 11:10:55.767130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.144 ms 00:27:08.927 [2024-11-15 11:10:55.767140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.787497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.187 [2024-11-15 11:10:55.787543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:09.187 [2024-11-15 11:10:55.787558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.348 ms 00:27:09.187 [2024-11-15 11:10:55.787568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.789222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.187 [2024-11-15 11:10:55.789267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.187 [2024-11-15 11:10:55.789280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:27:09.187 [2024-11-15 11:10:55.789291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.825355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.187 [2024-11-15 11:10:55.825395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:09.187 [2024-11-15 11:10:55.825409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.105 ms 00:27:09.187 [2024-11-15 11:10:55.825419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.861440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.187 [2024-11-15 11:10:55.861489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:09.187 [2024-11-15 11:10:55.861502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.040 ms 00:27:09.187 [2024-11-15 11:10:55.861513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.897041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:09.187 [2024-11-15 11:10:55.897084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:09.187 [2024-11-15 11:10:55.897098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.537 ms 00:27:09.187 [2024-11-15 11:10:55.897108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.933237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.187 [2024-11-15 11:10:55.933280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:09.187 [2024-11-15 11:10:55.933294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.103 ms 00:27:09.187 [2024-11-15 11:10:55.933305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.187 [2024-11-15 11:10:55.933347] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:09.187 [2024-11-15 11:10:55.933365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:09.187 [2024-11-15 11:10:55.933384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:09.187 [2024-11-15 11:10:55.933395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:09.187 [2024-11-15 11:10:55.933407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:09.187 [2024-11-15 11:10:55.933418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:09.187 [2024-11-15 11:10:55.933428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:09.187 [2024-11-15 11:10:55.933439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933590] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 
[2024-11-15 11:10:55.933858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.933993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:09.188 [2024-11-15 11:10:55.934109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 
state: free 00:27:09.188 [2024-11-15 11:10:55.934120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 69-100: 0 / 261120 wr_cnt: 0 state: free (32 identical per-band entries collapsed) 00:27:09.189 [2024-11-15 11:10:55.934465] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:09.189 [2024-11-15 11:10:55.934479] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1730afc2-28ce-4d76-a5dc-be7a05b3dc82 00:27:09.189 [2024-11-15 11:10:55.934490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:09.189 [2024-11-15 11:10:55.934499] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:09.189 [2024-11-15 11:10:55.934509] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:09.189 [2024-11-15 11:10:55.934521] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:09.189 [2024-11-15 11:10:55.934538] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:09.189 [2024-11-15 11:10:55.934548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:09.189 [2024-11-15 11:10:55.934570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:09.189 [2024-11-15 11:10:55.934579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:09.189 [2024-11-15 11:10:55.934588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:09.189 [2024-11-15 11:10:55.934598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.189 [2024-11-15 11:10:55.934609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:09.189 [2024-11-15 11:10:55.934620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:27:09.189 [2024-11-15 11:10:55.934630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.189 [2024-11-15 11:10:55.954446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.189 [2024-11-15 11:10:55.954484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.189 [2024-11-15 11:10:55.954497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.781 ms 00:27:09.189 [2024-11-15 11:10:55.954508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.189 [2024-11-15 11:10:55.955027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.189 [2024-11-15 11:10:55.955050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.189 [2024-11-15 11:10:55.955067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms
00:27:09.189 [2024-11-15 11:10:55.955078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.189 [2024-11-15 11:10:56.006681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.189 [2024-11-15 11:10:56.006722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.189 [2024-11-15 11:10:56.006736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.189 [2024-11-15 11:10:56.006747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.189 [2024-11-15 11:10:56.006807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.189 [2024-11-15 11:10:56.006819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.189 [2024-11-15 11:10:56.006834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.189 [2024-11-15 11:10:56.006845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.189 [2024-11-15 11:10:56.006917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.189 [2024-11-15 11:10:56.006930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.189 [2024-11-15 11:10:56.006941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.189 [2024-11-15 11:10:56.006950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.189 [2024-11-15 11:10:56.006967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.189 [2024-11-15 11:10:56.006977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.189 [2024-11-15 11:10:56.006988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.189 [2024-11-15 11:10:56.007002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.133764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.133849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.448 [2024-11-15 11:10:56.133865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.133875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.233552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.233621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.448 [2024-11-15 11:10:56.233636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.233652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.233751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.233764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.448 [2024-11-15 11:10:56.233774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.233786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.233832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.233844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.448 [2024-11-15 11:10:56.233854] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.233866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.233975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.233989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.448 [2024-11-15 11:10:56.233999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.234009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.234045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.234057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:09.448 [2024-11-15 11:10:56.234068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.234077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.234121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.234133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:09.448 [2024-11-15 11:10:56.234143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.448 [2024-11-15 11:10:56.234153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.448 [2024-11-15 11:10:56.234195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.448 [2024-11-15 11:10:56.234206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:09.449 [2024-11-15 11:10:56.234217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.449 [2024-11-15 11:10:56.234227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.449 [2024-11-15 11:10:56.234344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.716 ms, result 0 00:27:10.396 00:27:10.396 00:27:10.654 11:10:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:12.552 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78504 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 78504 ']' 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 
78504 00:27:12.552 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78504) - No such process 00:27:12.552 Process with pid 78504 is not found 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 78504 is not found' 00:27:12.552 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:12.810 11:10:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:12.810 11:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:12.810 Remove shared memory files 00:27:12.810 11:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:12.810 11:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:12.811 11:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:12.811 11:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:12.811 11:10:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:12.811 00:27:12.811 real 3m31.208s 00:27:12.811 user 3m58.106s 00:27:12.811 sys 0m39.320s 00:27:12.811 11:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.811 11:10:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.811 ************************************ 00:27:12.811 END TEST ftl_dirty_shutdown 00:27:12.811 ************************************ 00:27:13.069 11:10:59 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:13.069 11:10:59 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:13.069 11:10:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.069 11:10:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:13.069 ************************************ 00:27:13.069 START TEST ftl_upgrade_shutdown 00:27:13.069 ************************************ 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:13.069 * Looking for test storage... 
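At this point the dirty-shutdown check has passed: testfile2 still matches the md5 recorded before the simulated crash, and the "kill: (78504) - No such process" message from autotest_common.sh is benign, since the target had already exited during the shutdown path. A minimal sketch of that verify-then-clean-up shape (paths and the $svcpid variable are illustrative, not the exact test script):

    # Verify data survived the dirty shutdown, then tear down a target
    # process that may already have exited on its own.
    md5sum -c "$testdir/testfile2.md5"                # non-zero exit fails the test
    rm -f "$testdir"/testfile{,2}{,.md5} "$testdir/config/ftl.json"
    if kill -0 "$svcpid" 2>/dev/null; then            # still running?
        kill "$svcpid" && wait "$svcpid"              # wait applies if it is our child
    else
        echo "Process with pid $svcpid is not found"  # already gone: not an error
    fi
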
00:27:13.069 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.069 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.327 --rc genhtml_branch_coverage=1 00:27:13.327 --rc genhtml_function_coverage=1 00:27:13.327 --rc genhtml_legend=1 00:27:13.327 --rc geninfo_all_blocks=1 00:27:13.327 --rc geninfo_unexecuted_blocks=1 00:27:13.327 00:27:13.327 ' 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.327 --rc genhtml_branch_coverage=1 00:27:13.327 --rc genhtml_function_coverage=1 00:27:13.327 --rc genhtml_legend=1 00:27:13.327 --rc geninfo_all_blocks=1 00:27:13.327 --rc geninfo_unexecuted_blocks=1 00:27:13.327 00:27:13.327 ' 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.327 --rc genhtml_branch_coverage=1 00:27:13.327 --rc genhtml_function_coverage=1 00:27:13.327 --rc genhtml_legend=1 00:27:13.327 --rc geninfo_all_blocks=1 00:27:13.327 --rc geninfo_unexecuted_blocks=1 00:27:13.327 00:27:13.327 ' 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:13.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.327 --rc genhtml_branch_coverage=1 00:27:13.327 --rc genhtml_function_coverage=1 00:27:13.327 --rc genhtml_legend=1 00:27:13.327 --rc geninfo_all_blocks=1 00:27:13.327 --rc geninfo_unexecuted_blocks=1 00:27:13.327 00:27:13.327 ' 00:27:13.327 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:13.328 11:10:59 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80774 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80774 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80774 ']' 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:13.328 11:10:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.328 [2024-11-15 11:11:00.086222] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
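The upgrade test now launches its own spdk_tgt pinned to core 0 (pid 80774) and blocks in waitforlisten until the RPC socket answers. Roughly this shape, sketched under assumptions (loop bounds and polling interval are illustrative; the real helper also keeps checking that the pid is still alive):

    # Start the target, then poll its UNIX-domain RPC socket until it is up.
    "$rootdir/build/bin/spdk_tgt" --cpumask='[0]' &
    svcpid=$!
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods only succeeds once the target is listening
        if "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
            break
        fi
        sleep 0.1
    done
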
00:27:13.328 [2024-11-15 11:11:00.086343] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80774 ] 00:27:13.587 [2024-11-15 11:11:00.259770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.587 [2024-11-15 11:11:00.382511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:14.523 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:27:14.782 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:15.040 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:15.040 { 00:27:15.040 "name": "basen1", 00:27:15.040 "aliases": [ 00:27:15.040 "91f83a73-d6a4-46a7-9b5d-63f139e3542b" 00:27:15.040 ], 00:27:15.040 "product_name": "NVMe disk", 00:27:15.040 "block_size": 4096, 00:27:15.040 "num_blocks": 1310720, 00:27:15.040 "uuid": "91f83a73-d6a4-46a7-9b5d-63f139e3542b", 00:27:15.040 "numa_id": -1, 00:27:15.040 "assigned_rate_limits": { 00:27:15.040 "rw_ios_per_sec": 0, 00:27:15.040 "rw_mbytes_per_sec": 0, 00:27:15.040 "r_mbytes_per_sec": 0, 00:27:15.040 "w_mbytes_per_sec": 0 00:27:15.040 }, 00:27:15.040 "claimed": true, 00:27:15.040 "claim_type": "read_many_write_one", 00:27:15.040 "zoned": false, 00:27:15.040 "supported_io_types": { 00:27:15.040 "read": true, 00:27:15.040 "write": true, 00:27:15.040 "unmap": true, 00:27:15.040 "flush": true, 00:27:15.040 "reset": true, 00:27:15.040 "nvme_admin": true, 00:27:15.040 "nvme_io": true, 00:27:15.040 "nvme_io_md": false, 00:27:15.040 "write_zeroes": true, 00:27:15.040 "zcopy": false, 00:27:15.040 "get_zone_info": false, 00:27:15.040 "zone_management": false, 00:27:15.040 "zone_append": false, 00:27:15.040 "compare": true, 00:27:15.040 "compare_and_write": false, 00:27:15.040 "abort": true, 00:27:15.040 "seek_hole": false, 00:27:15.040 "seek_data": false, 00:27:15.040 "copy": true, 00:27:15.040 "nvme_iov_md": false 00:27:15.040 }, 00:27:15.040 "driver_specific": { 00:27:15.040 "nvme": [ 00:27:15.040 { 00:27:15.040 "pci_address": "0000:00:11.0", 00:27:15.040 "trid": { 00:27:15.040 "trtype": "PCIe", 00:27:15.040 "traddr": "0000:00:11.0" 00:27:15.040 }, 00:27:15.040 "ctrlr_data": { 00:27:15.040 "cntlid": 0, 00:27:15.040 "vendor_id": "0x1b36", 00:27:15.040 "model_number": "QEMU NVMe Ctrl", 00:27:15.040 "serial_number": "12341", 00:27:15.040 "firmware_revision": "8.0.0", 00:27:15.040 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:15.040 "oacs": { 00:27:15.040 "security": 0, 00:27:15.040 "format": 1, 00:27:15.040 "firmware": 0, 00:27:15.040 "ns_manage": 1 00:27:15.040 }, 00:27:15.040 "multi_ctrlr": false, 00:27:15.040 "ana_reporting": false 00:27:15.040 }, 00:27:15.040 "vs": { 00:27:15.040 "nvme_version": "1.4" 00:27:15.040 }, 00:27:15.040 "ns_data": { 00:27:15.040 "id": 1, 00:27:15.040 "can_share": false 00:27:15.040 } 00:27:15.040 } 00:27:15.040 ], 00:27:15.040 "mp_policy": "active_passive" 00:27:15.040 } 00:27:15.040 } 00:27:15.040 ]' 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:15.041 11:11:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:15.299 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=db8059ac-1ebb-473e-a811-7a6b379eb895 00:27:15.299 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:15.299 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db8059ac-1ebb-473e-a811-7a6b379eb895 00:27:15.556 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:15.814 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=7a19f9d8-46c3-4e41-8618-0b60ecf63afb 00:27:15.814 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 7a19f9d8-46c3-4e41-8618-0b60ecf63afb 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=f225c5cc-d440-4926-8585-51e3a9f588bb 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z f225c5cc-d440-4926-8585-51e3a9f588bb ]] 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 f225c5cc-d440-4926-8585-51e3a9f588bb 5120 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=f225c5cc-d440-4926-8585-51e3a9f588bb 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size f225c5cc-d440-4926-8585-51e3a9f588bb 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=f225c5cc-d440-4926-8585-51e3a9f588bb 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:16.073 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f225c5cc-d440-4926-8585-51e3a9f588bb 00:27:16.331 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:16.331 { 00:27:16.331 "name": "f225c5cc-d440-4926-8585-51e3a9f588bb", 00:27:16.331 "aliases": [ 00:27:16.331 "lvs/basen1p0" 00:27:16.331 ], 00:27:16.331 "product_name": "Logical Volume", 00:27:16.331 "block_size": 4096, 00:27:16.331 "num_blocks": 5242880, 00:27:16.331 "uuid": "f225c5cc-d440-4926-8585-51e3a9f588bb", 00:27:16.331 "assigned_rate_limits": { 00:27:16.331 "rw_ios_per_sec": 0, 00:27:16.331 "rw_mbytes_per_sec": 0, 00:27:16.331 "r_mbytes_per_sec": 0, 00:27:16.331 "w_mbytes_per_sec": 0 00:27:16.331 }, 00:27:16.331 "claimed": false, 00:27:16.331 "zoned": false, 00:27:16.331 "supported_io_types": { 00:27:16.331 "read": true, 00:27:16.331 "write": true, 00:27:16.331 "unmap": true, 00:27:16.331 "flush": false, 00:27:16.331 "reset": true, 00:27:16.331 "nvme_admin": false, 00:27:16.331 "nvme_io": false, 00:27:16.331 "nvme_io_md": false, 00:27:16.332 "write_zeroes": 
true, 00:27:16.332 "zcopy": false, 00:27:16.332 "get_zone_info": false, 00:27:16.332 "zone_management": false, 00:27:16.332 "zone_append": false, 00:27:16.332 "compare": false, 00:27:16.332 "compare_and_write": false, 00:27:16.332 "abort": false, 00:27:16.332 "seek_hole": true, 00:27:16.332 "seek_data": true, 00:27:16.332 "copy": false, 00:27:16.332 "nvme_iov_md": false 00:27:16.332 }, 00:27:16.332 "driver_specific": { 00:27:16.332 "lvol": { 00:27:16.332 "lvol_store_uuid": "7a19f9d8-46c3-4e41-8618-0b60ecf63afb", 00:27:16.332 "base_bdev": "basen1", 00:27:16.332 "thin_provision": true, 00:27:16.332 "num_allocated_clusters": 0, 00:27:16.332 "snapshot": false, 00:27:16.332 "clone": false, 00:27:16.332 "esnap_clone": false 00:27:16.332 } 00:27:16.332 } 00:27:16.332 } 00:27:16.332 ]' 00:27:16.332 11:11:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:16.332 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:16.590 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:16.590 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:16.590 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:16.849 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:16.849 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:16.849 11:11:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d f225c5cc-d440-4926-8585-51e3a9f588bb -c cachen1p0 --l2p_dram_limit 2 00:27:17.109 [2024-11-15 11:11:03.786276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.109 [2024-11-15 11:11:03.786333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:17.109 [2024-11-15 11:11:03.786352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:17.109 [2024-11-15 11:11:03.786364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.109 [2024-11-15 11:11:03.786433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.109 [2024-11-15 11:11:03.786446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:17.109 [2024-11-15 11:11:03.786460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:17.110 [2024-11-15 11:11:03.786470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.786495] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:17.110 [2024-11-15 
11:11:03.787603] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:17.110 [2024-11-15 11:11:03.787640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.787652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:17.110 [2024-11-15 11:11:03.787666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.148 ms 00:27:17.110 [2024-11-15 11:11:03.787677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.787764] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 076dd754-373f-4c28-ad91-8848dad9fbe6 00:27:17.110 [2024-11-15 11:11:03.789217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.789252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:17.110 [2024-11-15 11:11:03.789265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:17.110 [2024-11-15 11:11:03.789278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.796794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.796835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:17.110 [2024-11-15 11:11:03.796851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.474 ms 00:27:17.110 [2024-11-15 11:11:03.796864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.796916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.796933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:17.110 [2024-11-15 11:11:03.796944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:17.110 [2024-11-15 11:11:03.796960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.797009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.797024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:17.110 [2024-11-15 11:11:03.797034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:17.110 [2024-11-15 11:11:03.797053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.797078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:17.110 [2024-11-15 11:11:03.802399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.802436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:17.110 [2024-11-15 11:11:03.802450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.332 ms 00:27:17.110 [2024-11-15 11:11:03.802461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.802493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.802505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:17.110 [2024-11-15 11:11:03.802518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:17.110 [2024-11-15 11:11:03.802541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.802593] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:17.110 [2024-11-15 11:11:03.802725] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:17.110 [2024-11-15 11:11:03.802745] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:17.110 [2024-11-15 11:11:03.802758] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:17.110 [2024-11-15 11:11:03.802775] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:17.110 [2024-11-15 11:11:03.802787] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:17.110 [2024-11-15 11:11:03.802801] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:17.110 [2024-11-15 11:11:03.802810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:17.110 [2024-11-15 11:11:03.802826] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:17.110 [2024-11-15 11:11:03.802836] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:17.110 [2024-11-15 11:11:03.802849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.802860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:17.110 [2024-11-15 11:11:03.802875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:27:17.110 [2024-11-15 11:11:03.802885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.802960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.110 [2024-11-15 11:11:03.802972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:17.110 [2024-11-15 11:11:03.802984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:17.110 [2024-11-15 11:11:03.803004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.110 [2024-11-15 11:11:03.803100] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:17.110 [2024-11-15 11:11:03.803113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:17.110 [2024-11-15 11:11:03.803126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:17.110 [2024-11-15 11:11:03.803137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:17.110 [2024-11-15 11:11:03.803159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:17.110 [2024-11-15 11:11:03.803180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:17.110 [2024-11-15 11:11:03.803192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:17.110 [2024-11-15 11:11:03.803201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:17.110 [2024-11-15 11:11:03.803223] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:17.110 [2024-11-15 11:11:03.803234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:17.110 [2024-11-15 11:11:03.803258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:17.110 [2024-11-15 11:11:03.803268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:17.110 [2024-11-15 11:11:03.803293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:17.110 [2024-11-15 11:11:03.803304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:17.110 [2024-11-15 11:11:03.803326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:17.110 [2024-11-15 11:11:03.803336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:17.110 [2024-11-15 11:11:03.803347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:17.110 [2024-11-15 11:11:03.803356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:17.110 [2024-11-15 11:11:03.803368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:17.110 [2024-11-15 11:11:03.803377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:17.110 [2024-11-15 11:11:03.803389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:17.110 [2024-11-15 11:11:03.803397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:17.110 [2024-11-15 11:11:03.803409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:17.110 [2024-11-15 11:11:03.803418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:17.110 [2024-11-15 11:11:03.803430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:17.110 [2024-11-15 11:11:03.803440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:17.110 [2024-11-15 11:11:03.803453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:17.110 [2024-11-15 11:11:03.803462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.110 [2024-11-15 11:11:03.803474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:17.110 [2024-11-15 11:11:03.803483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:17.111 [2024-11-15 11:11:03.803494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.111 [2024-11-15 11:11:03.803503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:17.111 [2024-11-15 11:11:03.803515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:17.111 [2024-11-15 11:11:03.803535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.111 [2024-11-15 11:11:03.803547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:17.111 [2024-11-15 11:11:03.803557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:17.111 [2024-11-15 11:11:03.803570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.111 [2024-11-15 11:11:03.803579] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:17.111 [2024-11-15 11:11:03.803592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:17.111 [2024-11-15 11:11:03.803602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:17.111 [2024-11-15 11:11:03.803615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:17.111 [2024-11-15 11:11:03.803626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:17.111 [2024-11-15 11:11:03.803640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:17.111 [2024-11-15 11:11:03.803649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:17.111 [2024-11-15 11:11:03.803661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:17.111 [2024-11-15 11:11:03.803671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:17.111 [2024-11-15 11:11:03.803683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:17.111 [2024-11-15 11:11:03.803697] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:17.111 [2024-11-15 11:11:03.803713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:17.111 [2024-11-15 11:11:03.803740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803763] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:17.111 [2024-11-15 11:11:03.803774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:17.111 [2024-11-15 11:11:03.803787] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:17.111 [2024-11-15 11:11:03.803798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:17.111 [2024-11-15 11:11:03.803811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:17.111 [2024-11-15 11:11:03.803895] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:17.111 [2024-11-15 11:11:03.803909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:17.111 [2024-11-15 11:11:03.803933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:17.111 [2024-11-15 11:11:03.803943] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:17.111 [2024-11-15 11:11:03.803956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:17.111 [2024-11-15 11:11:03.803966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.111 [2024-11-15 11:11:03.803979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:17.111 [2024-11-15 11:11:03.803988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.926 ms 00:27:17.111 [2024-11-15 11:11:03.804002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.111 [2024-11-15 11:11:03.804042] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
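Every FTL management step in this trace is emitted as the same Action / name / duration / status quadruple from mngt/ftl_mngt.c, which makes slow phases easy to pull out of a saved log; the NV cache scrub announced above turns out to dominate this startup (about 2.9 s of the 3.4 s total, as the following lines show). A small extraction sketch (log filename is illustrative):

    # Print "duration<TAB>step name" pairs from the trace, slowest first.
    awk '/428:trace_step/ { sub(/.*name: /, "");     name = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print $0 "\t" name }' ftl.log |
        sort -gr | head
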
00:27:17.111 [2024-11-15 11:11:03.804061] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:20.421 [2024-11-15 11:11:06.732260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.732325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:20.421 [2024-11-15 11:11:06.732343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2932.970 ms 00:27:20.421 [2024-11-15 11:11:06.732358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.771676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.771729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:20.421 [2024-11-15 11:11:06.771745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.024 ms 00:27:20.421 [2024-11-15 11:11:06.771758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.771868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.771885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:20.421 [2024-11-15 11:11:06.771897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:27:20.421 [2024-11-15 11:11:06.771913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.816140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.816189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:20.421 [2024-11-15 11:11:06.816203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.252 ms 00:27:20.421 [2024-11-15 11:11:06.816217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.816263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.816283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:20.421 [2024-11-15 11:11:06.816294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:20.421 [2024-11-15 11:11:06.816308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.816839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.816866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:20.421 [2024-11-15 11:11:06.816878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.437 ms 00:27:20.421 [2024-11-15 11:11:06.816890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.816943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.816958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:20.421 [2024-11-15 11:11:06.816971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:20.421 [2024-11-15 11:11:06.816987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.836825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.421 [2024-11-15 11:11:06.836872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:20.421 [2024-11-15 11:11:06.836886] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.850 ms 00:27:20.421 [2024-11-15 11:11:06.836900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.421 [2024-11-15 11:11:06.848816] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:20.421 [2024-11-15 11:11:06.849900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:06.849928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:20.422 [2024-11-15 11:11:06.849944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.912 ms 00:27:20.422 [2024-11-15 11:11:06.849955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:06.891099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:06.891141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:20.422 [2024-11-15 11:11:06.891160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.173 ms 00:27:20.422 [2024-11-15 11:11:06.891171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:06.891265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:06.891282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:20.422 [2024-11-15 11:11:06.891298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:20.422 [2024-11-15 11:11:06.891309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:06.926499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:06.926545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:20.422 [2024-11-15 11:11:06.926564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.187 ms 00:27:20.422 [2024-11-15 11:11:06.926575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:06.963178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:06.963215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:20.422 [2024-11-15 11:11:06.963233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.610 ms 00:27:20.422 [2024-11-15 11:11:06.963243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:06.963985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:06.964015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:20.422 [2024-11-15 11:11:06.964031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.700 ms 00:27:20.422 [2024-11-15 11:11:06.964041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.062147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:07.062199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:20.422 [2024-11-15 11:11:07.062223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.197 ms 00:27:20.422 [2024-11-15 11:11:07.062235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.099865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
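Each FTL management step above is traced as a fixed quadruple of records: Action, name, duration, status. That structure makes the startup cost easy to break down; in this run "Scrub NV cache" dominates at roughly 2933 ms while most other steps finish in well under a millisecond. A small sketch that sums durations per step name, assuming the console output was saved one record per line as build.log (an assumed filename):

    # total trace_step durations per step; a name record always
    # immediately precedes its duration record in the trace
    awk '
      /trace_step.*name: /     { sub(/.*name: /, ""); step = $0 }
      /trace_step.*duration: / { match($0, /duration: [0-9.]+/)
                                 total[step] += substr($0, RSTART + 10, RLENGTH - 10) }
      END { for (s in total) printf "%12.3f ms  %s\n", total[s], s }
    ' build.log | sort -rn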
00:27:20.422 [2024-11-15 11:11:07.099911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:20.422 [2024-11-15 11:11:07.099941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.598 ms 00:27:20.422 [2024-11-15 11:11:07.099952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.136441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:07.136483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:20.422 [2024-11-15 11:11:07.136502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.496 ms 00:27:20.422 [2024-11-15 11:11:07.136512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.172367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:07.172405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:20.422 [2024-11-15 11:11:07.172424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.858 ms 00:27:20.422 [2024-11-15 11:11:07.172434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.172484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:07.172497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:20.422 [2024-11-15 11:11:07.172514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:20.422 [2024-11-15 11:11:07.172534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.172637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.422 [2024-11-15 11:11:07.172650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:20.422 [2024-11-15 11:11:07.172667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:27:20.422 [2024-11-15 11:11:07.172677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.422 [2024-11-15 11:11:07.173707] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3392.489 ms, result 0 00:27:20.422 { 00:27:20.422 "name": "ftl", 00:27:20.422 "uuid": "076dd754-373f-4c28-ad91-8848dad9fbe6" 00:27:20.422 } 00:27:20.422 11:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:20.682 [2024-11-15 11:11:07.396576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.682 11:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:20.940 11:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:20.940 [2024-11-15 11:11:07.772318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:20.940 11:11:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:21.198 [2024-11-15 11:11:07.978174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:21.198 11:11:07 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:21.764 Fill FTL, iteration 1 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80902 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80902 /var/tmp/spdk.tgt.sock 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 80902 ']' 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:21.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:21.764 11:11:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:21.764 [2024-11-15 11:11:08.480471] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
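The knobs set at upgrade_shutdown.sh@28-35 define the whole exercise: bs=1048576 times count=1024 is exactly the declared size of 1073741824 bytes (1 GiB) per pass, and iterations=2 with seek and skip each advancing by count moves 2 GiB through the FTL device, one MD5 digest per pass collected into sums[]. A local analogue of that loop, with plain dd on a scratch file (ftl.img, an assumed name) standing in for the tcp_dd helper over NVMe/TCP:

    # fill-and-checksum loop in miniature; the seek/skip bookkeeping
    # matches the trace, only the transport differs
    bs=1048576 count=1024 iterations=2 seek=0 skip=0
    declare -a sums
    for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      dd if=/dev/urandom of=ftl.img bs=$bs count=$count seek=$seek \
         conv=notrunc status=none
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      dd if=ftl.img of=chunk bs=$bs count=$count skip=$skip status=none
      sums[i]=$(md5sum chunk | cut -f1 -d' ')
      skip=$((skip + count))
    done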
00:27:21.764 [2024-11-15 11:11:08.480619] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80902 ] 00:27:22.022 [2024-11-15 11:11:08.652245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.022 [2024-11-15 11:11:08.770271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.957 11:11:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.957 11:11:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:22.957 11:11:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:23.215 ftln1 00:27:23.215 11:11:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:23.215 11:11:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80902 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80902 ']' 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80902 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80902 00:27:23.474 killing process with pid 80902 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80902' 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80902 00:27:23.474 11:11:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80902 00:27:26.008 11:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:26.008 11:11:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:26.008 [2024-11-15 11:11:12.619739] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
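The initiator-side plumbing above, in sequence: a throwaway spdk_tgt comes up on core 1 with its RPC socket at /var/tmp/spdk.tgt.sock, bdev_nvme_attach_controller connects to the target over TCP (controller name "ftl", so the remote namespace surfaces as bdev ftln1), the bdev subsystem config is snapshotted into ini.json, the helper process is killed, and spdk_dd then replays that JSON to reach ftln1 on its own. Condensed below; every command appears in the trace, SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk, and the redirect into ini.json is inferred from the --json argument:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
         -f ipv4 -n nqn.2018-09.io.spdk:cnode0            # -> bdev "ftln1"
    { echo '{"subsystems": ['
      $rpc save_subsystem_config -n bdev
      echo ']}'
    } > ini.json
    $SPDK_DIR/build/bin/spdk_dd --cpumask='[1]' --json=ini.json \
         --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0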
00:27:26.008 [2024-11-15 11:11:12.619861] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80955 ] 00:27:26.008 [2024-11-15 11:11:12.799553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.267 [2024-11-15 11:11:12.916319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.644  [2024-11-15T11:11:15.442Z] Copying: 243/1024 [MB] (243 MBps) [2024-11-15T11:11:16.439Z] Copying: 490/1024 [MB] (247 MBps) [2024-11-15T11:11:17.376Z] Copying: 733/1024 [MB] (243 MBps) [2024-11-15T11:11:17.633Z] Copying: 978/1024 [MB] (245 MBps) [2024-11-15T11:11:19.010Z] Copying: 1024/1024 [MB] (average 244 MBps) 00:27:32.149 00:27:32.149 Calculate MD5 checksum, iteration 1 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:32.149 11:11:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:32.149 [2024-11-15 11:11:18.785225] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
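The checksum pass is the fill's mirror image: data flows out of ftln1 with --ib/--of/--skip rather than into it with --if/--ob/--seek, at the same block size, count, and queue depth, and each pass's digest lands in sums[] (the comparison against post-restart reads presumably happens beyond this excerpt). Iteration 1's read-back, flags as traced above:

    # read-back mirrors the fill; --skip advances just as --seek did
    $SPDK_DIR/build/bin/spdk_dd --cpumask='[1]' --json=ini.json \
        --ib=ftln1 --of=$SPDK_DIR/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    md5sum $SPDK_DIR/test/ftl/file | cut -f1 -d' '   # -> 9d2a935a42812b0ee9a63c5a30a18a44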
00:27:32.149 [2024-11-15 11:11:18.785554] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81019 ] 00:27:32.149 [2024-11-15 11:11:18.968219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.408 [2024-11-15 11:11:19.087083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.787  [2024-11-15T11:11:21.216Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-15T11:11:22.153Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:27:35.292 00:27:35.292 11:11:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:35.292 11:11:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:37.198 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:37.198 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=9d2a935a42812b0ee9a63c5a30a18a44 00:27:37.198 Fill FTL, iteration 2 00:27:37.198 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:37.198 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:37.198 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:37.199 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:37.199 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:37.199 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:37.199 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:37.199 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:37.199 11:11:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:37.199 [2024-11-15 11:11:23.840717] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:27:37.199 [2024-11-15 11:11:23.840971] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81069 ] 00:27:37.199 [2024-11-15 11:11:24.020713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.458 [2024-11-15 11:11:24.142377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.832  [2024-11-15T11:11:26.627Z] Copying: 246/1024 [MB] (246 MBps) [2024-11-15T11:11:28.023Z] Copying: 475/1024 [MB] (229 MBps) [2024-11-15T11:11:28.962Z] Copying: 700/1024 [MB] (225 MBps) [2024-11-15T11:11:29.221Z] Copying: 925/1024 [MB] (225 MBps) [2024-11-15T11:11:30.599Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:27:43.738 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:43.738 Calculate MD5 checksum, iteration 2 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:43.738 11:11:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:43.738 [2024-11-15 11:11:30.294507] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:27:43.738 [2024-11-15 11:11:30.294810] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81143 ] 00:27:43.738 [2024-11-15 11:11:30.473608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.738 [2024-11-15 11:11:30.596115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.641  [2024-11-15T11:11:33.071Z] Copying: 664/1024 [MB] (664 MBps) [2024-11-15T11:11:34.450Z] Copying: 1024/1024 [MB] (average 655 MBps) 00:27:47.589 00:27:47.589 11:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:47.589 11:11:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:49.490 11:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:49.490 11:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=7100ae26102d69dbf130a7dd090a6904 00:27:49.490 11:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:49.490 11:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:49.490 11:11:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:49.490 [2024-11-15 11:11:36.047774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.490 [2024-11-15 11:11:36.047834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:49.490 [2024-11-15 11:11:36.047851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:49.490 [2024-11-15 11:11:36.047863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.490 [2024-11-15 11:11:36.047892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.490 [2024-11-15 11:11:36.047903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:49.490 [2024-11-15 11:11:36.047914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.490 [2024-11-15 11:11:36.047929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.490 [2024-11-15 11:11:36.047950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.490 [2024-11-15 11:11:36.047961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:49.490 [2024-11-15 11:11:36.047972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:49.490 [2024-11-15 11:11:36.047982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.490 [2024-11-15 11:11:36.048042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:27:49.490 true 00:27:49.490 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:49.490 { 00:27:49.490 "name": "ftl", 00:27:49.490 "properties": [ 00:27:49.490 { 00:27:49.490 "name": "superblock_version", 00:27:49.490 "value": 5, 00:27:49.490 "read-only": true 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "name": "base_device", 00:27:49.490 "bands": [ 00:27:49.490 { 00:27:49.490 "id": 0, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 
00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 1, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 2, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 3, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 4, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 5, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 6, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 7, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 8, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 9, 00:27:49.490 "state": "FREE", 00:27:49.490 "validity": 0.0 00:27:49.490 }, 00:27:49.490 { 00:27:49.490 "id": 10, 00:27:49.490 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 11, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 12, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 13, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 14, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 15, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 16, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 17, 00:27:49.491 "state": "FREE", 00:27:49.491 "validity": 0.0 00:27:49.491 } 00:27:49.491 ], 00:27:49.491 "read-only": true 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "name": "cache_device", 00:27:49.491 "type": "bdev", 00:27:49.491 "chunks": [ 00:27:49.491 { 00:27:49.491 "id": 0, 00:27:49.491 "state": "INACTIVE", 00:27:49.491 "utilization": 0.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 1, 00:27:49.491 "state": "CLOSED", 00:27:49.491 "utilization": 1.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 2, 00:27:49.491 "state": "CLOSED", 00:27:49.491 "utilization": 1.0 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 3, 00:27:49.491 "state": "OPEN", 00:27:49.491 "utilization": 0.001953125 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "id": 4, 00:27:49.491 "state": "OPEN", 00:27:49.491 "utilization": 0.0 00:27:49.491 } 00:27:49.491 ], 00:27:49.491 "read-only": true 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "name": "verbose_mode", 00:27:49.491 "value": true, 00:27:49.491 "unit": "", 00:27:49.491 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:49.491 }, 00:27:49.491 { 00:27:49.491 "name": "prep_upgrade_on_shutdown", 00:27:49.491 "value": false, 00:27:49.491 "unit": "", 00:27:49.491 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:49.491 } 00:27:49.491 ] 00:27:49.491 } 00:27:49.491 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:49.750 [2024-11-15 11:11:36.468752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
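The cache_device chunk list in the dump above is what the next check inspects: chunks 1 and 2 sit CLOSED at utilization 1.0, consistent with the two 1 GiB fills, chunk 3 is OPEN with a small residue (0.001953125, i.e. exactly 1/512 of a chunk), and chunk 0 is INACTIVE. The jq filter at upgrade_shutdown.sh@63 just below counts chunks with nonzero utilization, and the [[ ... -eq 0 ]] test right after it sanity-checks that the fills actually landed in the cache. Standalone, against a saved copy of the get_properties JSON (props.json is an assumed filename, since the console copy above carries timestamp prefixes):

    jq '[.properties[]
         | select(.name == "cache_device")
         | .chunks[]
         | select(.utilization != 0.0)]
        | length' props.json     # -> 3, so the [[ 3 -eq 0 ]] guard does not trip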
00:27:49.750 [2024-11-15 11:11:36.468804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:49.750 [2024-11-15 11:11:36.468821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:49.750 [2024-11-15 11:11:36.468831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.750 [2024-11-15 11:11:36.468858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.750 [2024-11-15 11:11:36.468870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:49.750 [2024-11-15 11:11:36.468881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:49.750 [2024-11-15 11:11:36.468891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.750 [2024-11-15 11:11:36.468911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:49.750 [2024-11-15 11:11:36.468923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:49.750 [2024-11-15 11:11:36.468934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:49.750 [2024-11-15 11:11:36.468943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:49.750 [2024-11-15 11:11:36.469004] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.246 ms, result 0 00:27:49.750 true 00:27:49.750 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:49.750 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:49.750 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:50.008 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:50.008 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:50.008 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:50.008 [2024-11-15 11:11:36.860736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.008 [2024-11-15 11:11:36.860980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:50.008 [2024-11-15 11:11:36.861008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:50.008 [2024-11-15 11:11:36.861020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.008 [2024-11-15 11:11:36.861067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.008 [2024-11-15 11:11:36.861080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:50.008 [2024-11-15 11:11:36.861091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:50.008 [2024-11-15 11:11:36.861102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:50.008 [2024-11-15 11:11:36.861123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:50.008 [2024-11-15 11:11:36.861134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:50.008 [2024-11-15 11:11:36.861145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:50.008 [2024-11-15 11:11:36.861156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:50.008 [2024-11-15 11:11:36.861222] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.476 ms, result 0 00:27:50.008 true 00:27:50.267 11:11:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:50.267 { 00:27:50.268 "name": "ftl", 00:27:50.268 "properties": [ 00:27:50.268 { 00:27:50.268 "name": "superblock_version", 00:27:50.268 "value": 5, 00:27:50.268 "read-only": true 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "name": "base_device", 00:27:50.268 "bands": [ 00:27:50.268 { 00:27:50.268 "id": 0, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 1, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 2, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 3, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 4, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 5, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 6, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 7, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 8, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 9, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 10, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 11, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 12, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 13, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 14, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 15, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 16, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 17, 00:27:50.268 "state": "FREE", 00:27:50.268 "validity": 0.0 00:27:50.268 } 00:27:50.268 ], 00:27:50.268 "read-only": true 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "name": "cache_device", 00:27:50.268 "type": "bdev", 00:27:50.268 "chunks": [ 00:27:50.268 { 00:27:50.268 "id": 0, 00:27:50.268 "state": "INACTIVE", 00:27:50.268 "utilization": 0.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 1, 00:27:50.268 "state": "CLOSED", 00:27:50.268 "utilization": 1.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 2, 00:27:50.268 "state": "CLOSED", 00:27:50.268 "utilization": 1.0 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 3, 00:27:50.268 "state": "OPEN", 00:27:50.268 "utilization": 0.001953125 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "id": 4, 00:27:50.268 "state": "OPEN", 00:27:50.268 "utilization": 0.0 00:27:50.268 } 00:27:50.268 ], 00:27:50.268 "read-only": true 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "name": "verbose_mode", 
00:27:50.268 "value": true, 00:27:50.268 "unit": "", 00:27:50.268 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:50.268 }, 00:27:50.268 { 00:27:50.268 "name": "prep_upgrade_on_shutdown", 00:27:50.268 "value": true, 00:27:50.268 "unit": "", 00:27:50.268 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:50.268 } 00:27:50.268 ] 00:27:50.268 } 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80774 ]] 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80774 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 80774 ']' 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 80774 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:50.268 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80774 00:27:50.527 killing process with pid 80774 00:27:50.527 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:50.528 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:50.528 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80774' 00:27:50.528 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 80774 00:27:50.528 11:11:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 80774 00:27:51.464 [2024-11-15 11:11:38.259259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:51.464 [2024-11-15 11:11:38.278001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.464 [2024-11-15 11:11:38.278045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:51.464 [2024-11-15 11:11:38.278065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:51.464 [2024-11-15 11:11:38.278077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.464 [2024-11-15 11:11:38.278101] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:51.464 [2024-11-15 11:11:38.282337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.464 [2024-11-15 11:11:38.282367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:51.464 [2024-11-15 11:11:38.282380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.227 ms 00:27:51.464 [2024-11-15 11:11:38.282391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.468675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.468927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:59.613 [2024-11-15 11:11:45.468968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7197.921 ms 00:27:59.613 [2024-11-15 11:11:45.468984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.470169] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.470204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:59.613 [2024-11-15 11:11:45.470216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.151 ms 00:27:59.613 [2024-11-15 11:11:45.470227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.471164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.471194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:59.613 [2024-11-15 11:11:45.471207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.906 ms 00:27:59.613 [2024-11-15 11:11:45.471217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.486339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.486486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:59.613 [2024-11-15 11:11:45.486652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.100 ms 00:27:59.613 [2024-11-15 11:11:45.486692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.495692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.495730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:59.613 [2024-11-15 11:11:45.495744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.953 ms 00:27:59.613 [2024-11-15 11:11:45.495756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.495858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.495872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:59.613 [2024-11-15 11:11:45.495884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:27:59.613 [2024-11-15 11:11:45.495901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.510275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.510312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:59.613 [2024-11-15 11:11:45.510326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.379 ms 00:27:59.613 [2024-11-15 11:11:45.510337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.524989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.525026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:59.613 [2024-11-15 11:11:45.525040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.638 ms 00:27:59.613 [2024-11-15 11:11:45.525050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.539182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.539234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:59.613 [2024-11-15 11:11:45.539248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.115 ms 00:27:59.613 [2024-11-15 11:11:45.539257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.553184] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.553336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:59.613 [2024-11-15 11:11:45.553359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.863 ms 00:27:59.613 [2024-11-15 11:11:45.553369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.613 [2024-11-15 11:11:45.553412] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:59.613 [2024-11-15 11:11:45.553430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:59.613 [2024-11-15 11:11:45.553444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:59.613 [2024-11-15 11:11:45.553471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:59.613 [2024-11-15 11:11:45.553483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:59.613 [2024-11-15 11:11:45.553676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:59.613 [2024-11-15 11:11:45.553687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 076dd754-373f-4c28-ad91-8848dad9fbe6 00:27:59.613 [2024-11-15 11:11:45.553698] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:59.613 [2024-11-15 11:11:45.553708] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:59.613 [2024-11-15 11:11:45.553718] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:59.613 [2024-11-15 11:11:45.553729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:59.613 [2024-11-15 11:11:45.553739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:59.613 [2024-11-15 11:11:45.553750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:59.613 [2024-11-15 11:11:45.553766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:59.613 [2024-11-15 11:11:45.553775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:59.613 [2024-11-15 11:11:45.553785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:59.613 [2024-11-15 11:11:45.553796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.613 [2024-11-15 11:11:45.553807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:59.613 [2024-11-15 11:11:45.553821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.387 ms 00:27:59.614 [2024-11-15 11:11:45.553831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.573586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.614 [2024-11-15 11:11:45.573623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:59.614 [2024-11-15 11:11:45.573636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.753 ms 00:27:59.614 [2024-11-15 11:11:45.573648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.574182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.614 [2024-11-15 11:11:45.574194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:59.614 [2024-11-15 11:11:45.574205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.502 ms 00:27:59.614 [2024-11-15 11:11:45.574215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.639901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.639974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:59.614 [2024-11-15 11:11:45.639989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.640006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.640056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.640068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:59.614 [2024-11-15 11:11:45.640078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.640088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.640202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.640216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:59.614 [2024-11-15 11:11:45.640227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.640238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.640262] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.640273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:59.614 [2024-11-15 11:11:45.640284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.640294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.766120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.766381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:59.614 [2024-11-15 11:11:45.766406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.766417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.866393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.866461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:59.614 [2024-11-15 11:11:45.866485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.866496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.866633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.866648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:59.614 [2024-11-15 11:11:45.866659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.866670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.866724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.866743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:59.614 [2024-11-15 11:11:45.866753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.866763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.866882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.866896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:59.614 [2024-11-15 11:11:45.866906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.866916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.866952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.866964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:59.614 [2024-11-15 11:11:45.866979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.866989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.867038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.867050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:59.614 [2024-11-15 11:11:45.867060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.867069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 
[2024-11-15 11:11:45.867113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:59.614 [2024-11-15 11:11:45.867129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:59.614 [2024-11-15 11:11:45.867139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:59.614 [2024-11-15 11:11:45.867149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.614 [2024-11-15 11:11:45.867280] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7601.565 ms, result 0 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81338 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81338 00:28:02.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81338 ']' 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:02.903 11:11:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:02.903 [2024-11-15 11:11:49.221644] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
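This is the pivot of the test: with prep_upgrade_on_shutdown set to true, killing pid 80774 triggered the 7601.565 ms 'FTL shutdown' sequence above (persist L2P, NV cache metadata, valid map, P2L, band info, trim state, and superblock, then mark the device clean and roll back the runtime resources), and a fresh spdk_tgt now boots from the tgt.json snapshot taken back at common.sh@126. The statistics dump also shows the write-amplification arithmetic in the open: WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006, and assuming the counters are 4 KiB FTL blocks, 524288 blocks is exactly the 2 GiB the two fills wrote. The handoff in outline; paths follow the log, the redirect into tgt.json is inferred from the --config argument, and the "Currently unable to find bdev" notices just below appear to be benign waits while the restored bdevs register:

    cfg=$SPDK_DIR/test/ftl/config/tgt.json
    $SPDK_DIR/scripts/rpc.py save_config > "$cfg"       # done earlier, common.sh@126
    $SPDK_DIR/scripts/rpc.py bdev_ftl_set_property -b ftl \
        -p prep_upgrade_on_shutdown -v true
    kill $spdk_tgt_pid && wait $spdk_tgt_pid            # pid 80774: runs 'FTL shutdown'
    $SPDK_DIR/build/bin/spdk_tgt --cpumask='[0]' --config="$cfg" &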
00:28:02.903 [2024-11-15 11:11:49.221915] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81338 ] 00:28:02.903 [2024-11-15 11:11:49.401604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.903 [2024-11-15 11:11:49.517462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.838 [2024-11-15 11:11:50.490573] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.838 [2024-11-15 11:11:50.490648] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:03.838 [2024-11-15 11:11:50.639217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.639292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:03.838 [2024-11-15 11:11:50.639312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:03.838 [2024-11-15 11:11:50.639326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.639400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.639420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:03.838 [2024-11-15 11:11:50.639439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:03.838 [2024-11-15 11:11:50.639453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.639491] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:03.838 [2024-11-15 11:11:50.640519] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:03.838 [2024-11-15 11:11:50.640570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.640589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:03.838 [2024-11-15 11:11:50.640608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.093 ms 00:28:03.838 [2024-11-15 11:11:50.640625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.642322] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:03.838 [2024-11-15 11:11:50.661168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.661219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:03.838 [2024-11-15 11:11:50.661245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.877 ms 00:28:03.838 [2024-11-15 11:11:50.661259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.661332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.661349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:03.838 [2024-11-15 11:11:50.661363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:03.838 [2024-11-15 11:11:50.661376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.669114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 
11:11:50.669169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:03.838 [2024-11-15 11:11:50.669186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.641 ms 00:28:03.838 [2024-11-15 11:11:50.669199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.669284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.669301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:03.838 [2024-11-15 11:11:50.669315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:28:03.838 [2024-11-15 11:11:50.669328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.669391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.669411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:03.838 [2024-11-15 11:11:50.669434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:03.838 [2024-11-15 11:11:50.669451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.669487] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:03.838 [2024-11-15 11:11:50.674646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.674684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:03.838 [2024-11-15 11:11:50.674699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.176 ms 00:28:03.838 [2024-11-15 11:11:50.674717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.674751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.674765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:03.838 [2024-11-15 11:11:50.674781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:03.838 [2024-11-15 11:11:50.674796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.674879] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:03.838 [2024-11-15 11:11:50.674908] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:03.838 [2024-11-15 11:11:50.674951] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:03.838 [2024-11-15 11:11:50.674977] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:03.838 [2024-11-15 11:11:50.675085] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:03.838 [2024-11-15 11:11:50.675102] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:03.838 [2024-11-15 11:11:50.675119] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:03.838 [2024-11-15 11:11:50.675135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:03.838 [2024-11-15 11:11:50.675150] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:03.838 [2024-11-15 11:11:50.675168] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:03.838 [2024-11-15 11:11:50.675181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:03.838 [2024-11-15 11:11:50.675194] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:03.838 [2024-11-15 11:11:50.675207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:03.838 [2024-11-15 11:11:50.675221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.675237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:03.838 [2024-11-15 11:11:50.675254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.345 ms 00:28:03.838 [2024-11-15 11:11:50.675269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.675360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.838 [2024-11-15 11:11:50.675376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:03.838 [2024-11-15 11:11:50.675389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:28:03.838 [2024-11-15 11:11:50.675409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.838 [2024-11-15 11:11:50.675541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:03.838 [2024-11-15 11:11:50.675559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:03.838 [2024-11-15 11:11:50.675573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.839 [2024-11-15 11:11:50.675587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.675600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:03.839 [2024-11-15 11:11:50.675612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.675624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:03.839 [2024-11-15 11:11:50.675637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:03.839 [2024-11-15 11:11:50.675666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:03.839 [2024-11-15 11:11:50.675679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.675694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:03.839 [2024-11-15 11:11:50.675710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:03.839 [2024-11-15 11:11:50.675730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.675744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:03.839 [2024-11-15 11:11:50.675761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:03.839 [2024-11-15 11:11:50.675774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.675787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:03.839 [2024-11-15 11:11:50.675800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:03.839 [2024-11-15 11:11:50.675815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.675827] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:03.839 [2024-11-15 11:11:50.675839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:03.839 [2024-11-15 11:11:50.675851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.839 [2024-11-15 11:11:50.675863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:03.839 [2024-11-15 11:11:50.675878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:03.839 [2024-11-15 11:11:50.675893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.839 [2024-11-15 11:11:50.675922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:03.839 [2024-11-15 11:11:50.675939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:03.839 [2024-11-15 11:11:50.675953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.839 [2024-11-15 11:11:50.675965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:03.839 [2024-11-15 11:11:50.675977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:03.839 [2024-11-15 11:11:50.675989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:03.839 [2024-11-15 11:11:50.676000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:03.839 [2024-11-15 11:11:50.676013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:03.839 [2024-11-15 11:11:50.676025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.676036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:03.839 [2024-11-15 11:11:50.676049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:03.839 [2024-11-15 11:11:50.676061] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.676072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:03.839 [2024-11-15 11:11:50.676084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:03.839 [2024-11-15 11:11:50.676096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.676108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:03.839 [2024-11-15 11:11:50.676119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:03.839 [2024-11-15 11:11:50.676131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.676146] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:03.839 [2024-11-15 11:11:50.676166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:03.839 [2024-11-15 11:11:50.676181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:03.839 [2024-11-15 11:11:50.676197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:03.839 [2024-11-15 11:11:50.676219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:03.839 [2024-11-15 11:11:50.676233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:03.839 [2024-11-15 11:11:50.676246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:03.839 [2024-11-15 11:11:50.676258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:03.839 [2024-11-15 11:11:50.676270] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:03.839 [2024-11-15 11:11:50.676283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:03.839 [2024-11-15 11:11:50.676297] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:03.839 [2024-11-15 11:11:50.676313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:03.839 [2024-11-15 11:11:50.676349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:03.839 [2024-11-15 11:11:50.676402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:03.839 [2024-11-15 11:11:50.676415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:03.839 [2024-11-15 11:11:50.676429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:03.839 [2024-11-15 11:11:50.676442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:03.839 [2024-11-15 11:11:50.676549] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:03.839 [2024-11-15 11:11:50.676564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:03.839 [2024-11-15 11:11:50.676596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:03.839 [2024-11-15 11:11:50.676613] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:03.839 [2024-11-15 11:11:50.676627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:03.839 [2024-11-15 11:11:50.676647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:03.839 [2024-11-15 11:11:50.676666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:03.839 [2024-11-15 11:11:50.676681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.181 ms 00:28:03.839 [2024-11-15 11:11:50.676694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:03.839 [2024-11-15 11:11:50.676754] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:03.839 [2024-11-15 11:11:50.676775] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:07.126 [2024-11-15 11:11:53.774570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.774636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:07.126 [2024-11-15 11:11:53.774655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3102.844 ms 00:28:07.126 [2024-11-15 11:11:53.774666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.813484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.813552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:07.126 [2024-11-15 11:11:53.813597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.600 ms 00:28:07.126 [2024-11-15 11:11:53.813609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.813742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.813763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:07.126 [2024-11-15 11:11:53.813776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:07.126 [2024-11-15 11:11:53.813786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.859716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.859764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:07.126 [2024-11-15 11:11:53.859779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.954 ms 00:28:07.126 [2024-11-15 11:11:53.859794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.859849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.859862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:07.126 [2024-11-15 11:11:53.859873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:07.126 [2024-11-15 11:11:53.859883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.860367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.860382] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:07.126 [2024-11-15 11:11:53.860393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.406 ms 00:28:07.126 [2024-11-15 11:11:53.860403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.860452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.860463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:07.126 [2024-11-15 11:11:53.860474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:07.126 [2024-11-15 11:11:53.860485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.881529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.881579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:07.126 [2024-11-15 11:11:53.881595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.047 ms 00:28:07.126 [2024-11-15 11:11:53.881605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.900865] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:07.126 [2024-11-15 11:11:53.900906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:07.126 [2024-11-15 11:11:53.900921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.900932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:07.126 [2024-11-15 11:11:53.900944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.205 ms 00:28:07.126 [2024-11-15 11:11:53.900954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.920592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.920628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:07.126 [2024-11-15 11:11:53.920641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.639 ms 00:28:07.126 [2024-11-15 11:11:53.920653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.938405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.938439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:07.126 [2024-11-15 11:11:53.938453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.735 ms 00:28:07.126 [2024-11-15 11:11:53.938463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.955812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.955956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:07.126 [2024-11-15 11:11:53.955976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.350 ms 00:28:07.126 [2024-11-15 11:11:53.955986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.126 [2024-11-15 11:11:53.956744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.126 [2024-11-15 11:11:53.956772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:07.126 [2024-11-15 
11:11:53.956784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.665 ms 00:28:07.126 [2024-11-15 11:11:53.956794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.051609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.051665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:07.385 [2024-11-15 11:11:54.051682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 94.943 ms 00:28:07.385 [2024-11-15 11:11:54.051694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.062831] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:07.385 [2024-11-15 11:11:54.063870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.063899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:07.385 [2024-11-15 11:11:54.063914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.122 ms 00:28:07.385 [2024-11-15 11:11:54.063925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.064020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.064038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:07.385 [2024-11-15 11:11:54.064049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:07.385 [2024-11-15 11:11:54.064060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.064123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.064135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:07.385 [2024-11-15 11:11:54.064146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:07.385 [2024-11-15 11:11:54.064156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.064180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.064191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:07.385 [2024-11-15 11:11:54.064202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:07.385 [2024-11-15 11:11:54.064215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.064252] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:07.385 [2024-11-15 11:11:54.064265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.064275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:07.385 [2024-11-15 11:11:54.064285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:07.385 [2024-11-15 11:11:54.064295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.100131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.385 [2024-11-15 11:11:54.100289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:07.385 [2024-11-15 11:11:54.100311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.872 ms 00:28:07.385 [2024-11-15 11:11:54.100322] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.385 [2024-11-15 11:11:54.100425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.386 [2024-11-15 11:11:54.100439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:07.386 [2024-11-15 11:11:54.100451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:07.386 [2024-11-15 11:11:54.100460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.386 [2024-11-15 11:11:54.101605] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3467.551 ms, result 0 00:28:07.386 [2024-11-15 11:11:54.116619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.386 [2024-11-15 11:11:54.132597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:07.386 [2024-11-15 11:11:54.141570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:07.386 11:11:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:07.386 11:11:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:07.386 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:07.386 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:07.386 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:28:07.645 [2024-11-15 11:11:54.365253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.645 [2024-11-15 11:11:54.365314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:28:07.645 [2024-11-15 11:11:54.365330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:07.645 [2024-11-15 11:11:54.365361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.645 [2024-11-15 11:11:54.365388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.645 [2024-11-15 11:11:54.365399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:28:07.645 [2024-11-15 11:11:54.365410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.645 [2024-11-15 11:11:54.365420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.645 [2024-11-15 11:11:54.365441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.645 [2024-11-15 11:11:54.365452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:28:07.645 [2024-11-15 11:11:54.365463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:07.645 [2024-11-15 11:11:54.365473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.645 [2024-11-15 11:11:54.365537] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.276 ms, result 0 00:28:07.645 true 00:28:07.645 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:07.904 { 00:28:07.904 "name": "ftl", 00:28:07.904 "properties": [ 00:28:07.904 { 00:28:07.904 "name": "superblock_version", 00:28:07.904 "value": 5, 00:28:07.904 "read-only": true 00:28:07.904 }, 
00:28:07.904 { 00:28:07.904 "name": "base_device", 00:28:07.904 "bands": [ 00:28:07.904 { 00:28:07.904 "id": 0, 00:28:07.904 "state": "CLOSED", 00:28:07.904 "validity": 1.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 1, 00:28:07.904 "state": "CLOSED", 00:28:07.904 "validity": 1.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 2, 00:28:07.904 "state": "CLOSED", 00:28:07.904 "validity": 0.007843137254901933 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 3, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 4, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 5, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 6, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 7, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 8, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 9, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 10, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 11, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 12, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 13, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 14, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 15, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 16, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 17, 00:28:07.904 "state": "FREE", 00:28:07.904 "validity": 0.0 00:28:07.904 } 00:28:07.904 ], 00:28:07.904 "read-only": true 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "name": "cache_device", 00:28:07.904 "type": "bdev", 00:28:07.904 "chunks": [ 00:28:07.904 { 00:28:07.904 "id": 0, 00:28:07.904 "state": "INACTIVE", 00:28:07.904 "utilization": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 1, 00:28:07.904 "state": "OPEN", 00:28:07.904 "utilization": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 2, 00:28:07.904 "state": "OPEN", 00:28:07.904 "utilization": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 3, 00:28:07.904 "state": "FREE", 00:28:07.904 "utilization": 0.0 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "id": 4, 00:28:07.904 "state": "FREE", 00:28:07.904 "utilization": 0.0 00:28:07.904 } 00:28:07.904 ], 00:28:07.904 "read-only": true 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "name": "verbose_mode", 00:28:07.904 "value": true, 00:28:07.904 "unit": "", 00:28:07.904 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:28:07.904 }, 00:28:07.904 { 00:28:07.904 "name": "prep_upgrade_on_shutdown", 00:28:07.904 "value": false, 00:28:07.904 "unit": "", 00:28:07.904 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:28:07.904 } 00:28:07.905 ] 00:28:07.905 } 00:28:07.905 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:07.905 11:11:54 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:07.905 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:08.163 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:08.163 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:08.163 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:08.163 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:08.163 11:11:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:08.422 Validate MD5 checksum, iteration 1 00:28:08.422 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:08.422 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:08.422 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:08.422 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:08.422 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:08.422 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:08.423 11:11:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:08.423 [2024-11-15 11:11:55.152634] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
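Each "Validate MD5 checksum" iteration reads one 1 GiB stripe of the FTL bdev back over NVMe/TCP and fingerprints it. The shape of that loop, reconstructed from the xtrace above (the tcp_dd arguments and the cut invocation are taken verbatim from the log; $testdir standing in for /home/vagrant/spdk_repo/spdk/test/ftl and md5[] as the per-stripe sums recorded at write time are assumptions for illustration):

validate_checksum_sketch() {
    local skip=0 iterations=2 sum i
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # spdk_dd attaches to ftln1 through the NVMe/TCP initiator config
        # (see the ini.json sketch further down) and copies 1024 x 1 MiB
        # blocks at queue depth 2, starting $skip blocks into the device
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 '-d ')
        # a mismatch against the checksum recorded when this stripe was
        # written fails the test
        [[ $sum == "${md5[$i]}" ]] || return 1
    done
}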
00:28:08.423 [2024-11-15 11:11:55.152916] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81409 ] 00:28:08.682 [2024-11-15 11:11:55.334546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.682 [2024-11-15 11:11:55.450449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.585  [2024-11-15T11:11:57.706Z] Copying: 706/1024 [MB] (706 MBps) [2024-11-15T11:11:59.085Z] Copying: 1024/1024 [MB] (average 692 MBps) 00:28:12.224 00:28:12.224 11:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:12.224 11:11:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9d2a935a42812b0ee9a63c5a30a18a44 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9d2a935a42812b0ee9a63c5a30a18a44 != \9\d\2\a\9\3\5\a\4\2\8\1\2\b\0\e\e\9\a\6\3\c\5\a\3\0\a\1\8\a\4\4 ]] 00:28:14.130 Validate MD5 checksum, iteration 2 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:14.130 11:12:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:14.130 [2024-11-15 11:12:00.925838] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
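Note that spdk_dd never touches the target's /var/tmp/spdk.sock: it boots its own app on core 1 (cpumask [1], RPC socket moved to /var/tmp/spdk.tgt.sock) and attaches to the listener at 127.0.0.1:4420 through the ini.json named on its command line. A plausible shape for that config, assuming bdev_nvme_attach_controller as the method (a controller named "ftl" exposes its first namespace as the bdev ftln1, matching --ib=ftln1); the subnqn value is illustrative:

cat > "$testdir/config/ini.json" << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "ftl",
            "trtype": "tcp",
            "adrfam": "IPv4",
            "traddr": "127.0.0.1",
            "trsvcid": "4420",
            "subnqn": "nqn.2018-09.io.spdk:cnode0"
          }
        }
      ]
    }
  ]
}
JSON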
00:28:14.130 [2024-11-15 11:12:00.926152] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81475 ] 00:28:14.389 [2024-11-15 11:12:01.106971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.389 [2024-11-15 11:12:01.222397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.291  [2024-11-15T11:12:03.718Z] Copying: 676/1024 [MB] (676 MBps) [2024-11-15T11:12:07.028Z] Copying: 1024/1024 [MB] (average 664 MBps) 00:28:20.167 00:28:20.168 11:12:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:20.168 11:12:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7100ae26102d69dbf130a7dd090a6904 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7100ae26102d69dbf130a7dd090a6904 != \7\1\0\0\a\e\2\6\1\0\2\d\6\9\d\b\f\1\3\0\a\7\d\d\0\9\0\a\6\9\0\4 ]] 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81338 ]] 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81338 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81553 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81553 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81553 ']' 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
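The point of this restart sequence is that the first target dies by SIGKILL, so FTL never gets to write a clean shutdown state: the second boot below has to take the recovery path ("Initialize recovery", "Recover band state", and the "P2L ckpt_id=N found seq_id=M" replay) instead of the NV cache scrub seen on the first, clean startup. Condensed from the ftl/common.sh xtrace above:

# tcp_target_shutdown_dirty: no RPC shutdown, no SIGTERM -- straight SIGKILL,
# leaving the FTL superblock marked dirty
[[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
unset spdk_tgt_pid

# upgrade_shutdown.sh then re-runs tcp_target_setup against the same tgt.json;
# the new instance (pid 81553 here) detects the dirty state and recovers
tcp_target_setup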
00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.541 11:12:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:21.800 [2024-11-15 11:12:08.404903] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:28:21.800 [2024-11-15 11:12:08.405086] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81553 ] 00:28:21.800 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81338 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:21.800 [2024-11-15 11:12:08.590217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.059 [2024-11-15 11:12:08.721387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.994 [2024-11-15 11:12:09.738698] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:22.994 [2024-11-15 11:12:09.738787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:23.254 [2024-11-15 11:12:09.887329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.887394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:23.254 [2024-11-15 11:12:09.887412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:23.254 [2024-11-15 11:12:09.887423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.887485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.887499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:23.254 [2024-11-15 11:12:09.887511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:28:23.254 [2024-11-15 11:12:09.887540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.887574] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:23.254 [2024-11-15 11:12:09.888657] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:23.254 [2024-11-15 11:12:09.888857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.888875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:23.254 [2024-11-15 11:12:09.888888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.295 ms 00:28:23.254 [2024-11-15 11:12:09.888899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.889384] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:23.254 [2024-11-15 11:12:09.916011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.916058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:23.254 [2024-11-15 11:12:09.916075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.669 ms 00:28:23.254 [2024-11-15 11:12:09.916087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.931594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:23.254 [2024-11-15 11:12:09.931634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:23.254 [2024-11-15 11:12:09.931652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:28:23.254 [2024-11-15 11:12:09.931663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.932175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.932192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:23.254 [2024-11-15 11:12:09.932204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:28:23.254 [2024-11-15 11:12:09.932214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.932276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.932293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:23.254 [2024-11-15 11:12:09.932304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:28:23.254 [2024-11-15 11:12:09.932330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.932360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.932371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:23.254 [2024-11-15 11:12:09.932383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:23.254 [2024-11-15 11:12:09.932393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.932421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:23.254 [2024-11-15 11:12:09.937203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.937239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:23.254 [2024-11-15 11:12:09.937253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.796 ms 00:28:23.254 [2024-11-15 11:12:09.937264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.937297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.937308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:23.254 [2024-11-15 11:12:09.937320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:23.254 [2024-11-15 11:12:09.937331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.254 [2024-11-15 11:12:09.937375] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:23.254 [2024-11-15 11:12:09.937398] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:23.254 [2024-11-15 11:12:09.937437] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:23.254 [2024-11-15 11:12:09.937459] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:23.254 [2024-11-15 11:12:09.937572] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:23.254 [2024-11-15 11:12:09.937602] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:23.254 [2024-11-15 11:12:09.937617] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:23.254 [2024-11-15 11:12:09.937631] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:23.254 [2024-11-15 11:12:09.937644] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:23.254 [2024-11-15 11:12:09.937656] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:23.254 [2024-11-15 11:12:09.937666] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:23.254 [2024-11-15 11:12:09.937677] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:23.254 [2024-11-15 11:12:09.937688] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:23.254 [2024-11-15 11:12:09.937701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.254 [2024-11-15 11:12:09.937717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:23.254 [2024-11-15 11:12:09.937729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:28:23.254 [2024-11-15 11:12:09.937739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:09.937820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:09.937832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:23.255 [2024-11-15 11:12:09.937844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:28:23.255 [2024-11-15 11:12:09.937854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:09.937962] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:23.255 [2024-11-15 11:12:09.937975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:23.255 [2024-11-15 11:12:09.937990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:23.255 [2024-11-15 11:12:09.938022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:23.255 [2024-11-15 11:12:09.938042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:23.255 [2024-11-15 11:12:09.938052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:23.255 [2024-11-15 11:12:09.938062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:23.255 [2024-11-15 11:12:09.938082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:23.255 [2024-11-15 11:12:09.938092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:23.255 [2024-11-15 11:12:09.938111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:23.255 [2024-11-15 11:12:09.938121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:23.255 [2024-11-15 11:12:09.938141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:23.255 [2024-11-15 11:12:09.938150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:23.255 [2024-11-15 11:12:09.938169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:23.255 [2024-11-15 11:12:09.938179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:23.255 [2024-11-15 11:12:09.938210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:23.255 [2024-11-15 11:12:09.938220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:23.255 [2024-11-15 11:12:09.938240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:23.255 [2024-11-15 11:12:09.938249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:23.255 [2024-11-15 11:12:09.938268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:23.255 [2024-11-15 11:12:09.938278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:23.255 [2024-11-15 11:12:09.938305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:23.255 [2024-11-15 11:12:09.938315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:23.255 [2024-11-15 11:12:09.938334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:23.255 [2024-11-15 11:12:09.938363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:23.255 [2024-11-15 11:12:09.938391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:23.255 [2024-11-15 11:12:09.938402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938411] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:23.255 [2024-11-15 11:12:09.938422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:23.255 [2024-11-15 11:12:09.938432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:23.255 [2024-11-15 11:12:09.938457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:23.255 [2024-11-15 11:12:09.938468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:23.255 [2024-11-15 11:12:09.938477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:23.255 [2024-11-15 11:12:09.938487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:23.255 [2024-11-15 11:12:09.938496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:23.255 [2024-11-15 11:12:09.938506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:23.255 [2024-11-15 11:12:09.938517] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:23.255 [2024-11-15 11:12:09.938531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:23.255 [2024-11-15 11:12:09.938567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:23.255 [2024-11-15 11:12:09.938617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:23.255 [2024-11-15 11:12:09.938628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:23.255 [2024-11-15 11:12:09.938639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:23.255 [2024-11-15 11:12:09.938650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:23.255 [2024-11-15 11:12:09.938753] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:23.255 [2024-11-15 11:12:09.938766] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:23.255 [2024-11-15 11:12:09.938789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:23.255 [2024-11-15 11:12:09.938800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:23.255 [2024-11-15 11:12:09.938813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:23.255 [2024-11-15 11:12:09.938826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:09.938841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:23.255 [2024-11-15 11:12:09.938852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.923 ms 00:28:23.255 [2024-11-15 11:12:09.938863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:09.979007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:09.979199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:23.255 [2024-11-15 11:12:09.979224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.145 ms 00:28:23.255 [2024-11-15 11:12:09.979237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:09.979293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:09.979305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:23.255 [2024-11-15 11:12:09.979317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:23.255 [2024-11-15 11:12:09.979328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:10.029160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:10.029208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:23.255 [2024-11-15 11:12:10.029223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.821 ms 00:28:23.255 [2024-11-15 11:12:10.029236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:10.029290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:10.029303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:23.255 [2024-11-15 11:12:10.029315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:23.255 [2024-11-15 11:12:10.029331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.255 [2024-11-15 11:12:10.029465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.255 [2024-11-15 11:12:10.029479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:23.255 [2024-11-15 11:12:10.029492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:28:23.256 [2024-11-15 11:12:10.029502] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:23.256 [2024-11-15 11:12:10.029572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.256 [2024-11-15 11:12:10.029596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:23.256 [2024-11-15 11:12:10.029608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:28:23.256 [2024-11-15 11:12:10.029619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.256 [2024-11-15 11:12:10.051166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.256 [2024-11-15 11:12:10.051211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:23.256 [2024-11-15 11:12:10.051227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.551 ms 00:28:23.256 [2024-11-15 11:12:10.051239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.256 [2024-11-15 11:12:10.051401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.256 [2024-11-15 11:12:10.051419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:23.256 [2024-11-15 11:12:10.051432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:23.256 [2024-11-15 11:12:10.051443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.256 [2024-11-15 11:12:10.087356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.256 [2024-11-15 11:12:10.087415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:23.256 [2024-11-15 11:12:10.087431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.944 ms 00:28:23.256 [2024-11-15 11:12:10.087442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.256 [2024-11-15 11:12:10.103881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.256 [2024-11-15 11:12:10.103923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:23.256 [2024-11-15 11:12:10.103947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.691 ms 00:28:23.256 [2024-11-15 11:12:10.103959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.514 [2024-11-15 11:12:10.198834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.514 [2024-11-15 11:12:10.199026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:23.514 [2024-11-15 11:12:10.199060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 94.931 ms 00:28:23.514 [2024-11-15 11:12:10.199072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.514 [2024-11-15 11:12:10.199310] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:23.514 [2024-11-15 11:12:10.199455] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:23.514 [2024-11-15 11:12:10.199617] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:23.514 [2024-11-15 11:12:10.199750] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:23.514 [2024-11-15 11:12:10.199785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.514 [2024-11-15 11:12:10.199797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:23.514 [2024-11-15 
11:12:10.199809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.610 ms 00:28:23.514 [2024-11-15 11:12:10.199820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.514 [2024-11-15 11:12:10.199919] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:23.514 [2024-11-15 11:12:10.199935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.514 [2024-11-15 11:12:10.199952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:23.514 [2024-11-15 11:12:10.199964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:23.514 [2024-11-15 11:12:10.199991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.514 [2024-11-15 11:12:10.224699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.514 [2024-11-15 11:12:10.224867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:23.514 [2024-11-15 11:12:10.224892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.720 ms 00:28:23.514 [2024-11-15 11:12:10.224905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.514 [2024-11-15 11:12:10.240622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.514 [2024-11-15 11:12:10.240665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:23.514 [2024-11-15 11:12:10.240679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:23.514 [2024-11-15 11:12:10.240691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.514 [2024-11-15 11:12:10.240802] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:23.514 [2024-11-15 11:12:10.241002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.514 [2024-11-15 11:12:10.241017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:23.514 [2024-11-15 11:12:10.241029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:28:23.514 [2024-11-15 11:12:10.241040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.080 [2024-11-15 11:12:10.812756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.080 [2024-11-15 11:12:10.812823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:24.080 [2024-11-15 11:12:10.812843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 571.277 ms 00:28:24.080 [2024-11-15 11:12:10.812855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.080 [2024-11-15 11:12:10.819198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.080 [2024-11-15 11:12:10.819387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:24.080 [2024-11-15 11:12:10.819413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.314 ms 00:28:24.080 [2024-11-15 11:12:10.819426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.080 [2024-11-15 11:12:10.819910] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:24.080 [2024-11-15 11:12:10.819936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.080 [2024-11-15 11:12:10.819948] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:24.080 [2024-11-15 11:12:10.819961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:28:24.080 [2024-11-15 11:12:10.819973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.080 [2024-11-15 11:12:10.820008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.080 [2024-11-15 11:12:10.820021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:24.080 [2024-11-15 11:12:10.820033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:24.080 [2024-11-15 11:12:10.820044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.080 [2024-11-15 11:12:10.820090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 580.230 ms, result 0 00:28:24.080 [2024-11-15 11:12:10.820138] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:24.080 [2024-11-15 11:12:10.820250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.080 [2024-11-15 11:12:10.820277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:24.080 [2024-11-15 11:12:10.820288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.114 ms 00:28:24.080 [2024-11-15 11:12:10.820298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.743 [2024-11-15 11:12:11.372167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.743 [2024-11-15 11:12:11.372238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:24.743 [2024-11-15 11:12:11.372257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 551.404 ms 00:28:24.743 [2024-11-15 11:12:11.372269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.743 [2024-11-15 11:12:11.378823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.743 [2024-11-15 11:12:11.379009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:24.743 [2024-11-15 11:12:11.379032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.186 ms 00:28:24.743 [2024-11-15 11:12:11.379044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.743 [2024-11-15 11:12:11.379555] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:24.743 [2024-11-15 11:12:11.379588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.743 [2024-11-15 11:12:11.379603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:24.743 [2024-11-15 11:12:11.379618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.503 ms 00:28:24.743 [2024-11-15 11:12:11.379632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.743 [2024-11-15 11:12:11.379671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.743 [2024-11-15 11:12:11.379687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:24.743 [2024-11-15 11:12:11.379702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:24.743 [2024-11-15 11:12:11.379715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 
11:12:11.379761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 560.530 ms, result 0 00:28:24.744 [2024-11-15 11:12:11.379811] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:24.744 [2024-11-15 11:12:11.379828] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:24.744 [2024-11-15 11:12:11.379844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.379858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:24.744 [2024-11-15 11:12:11.379873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1140.914 ms 00:28:24.744 [2024-11-15 11:12:11.379887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.379950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.379969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:24.744 [2024-11-15 11:12:11.379990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:24.744 [2024-11-15 11:12:11.380004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.392820] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:24.744 [2024-11-15 11:12:11.392980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.392995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:24.744 [2024-11-15 11:12:11.393008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.975 ms 00:28:24.744 [2024-11-15 11:12:11.393019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.393682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.393707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:24.744 [2024-11-15 11:12:11.393724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.571 ms 00:28:24.744 [2024-11-15 11:12:11.393735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.395727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.395753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:24.744 [2024-11-15 11:12:11.395765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.970 ms 00:28:24.744 [2024-11-15 11:12:11.395775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.395824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.395837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:24.744 [2024-11-15 11:12:11.395848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:24.744 [2024-11-15 11:12:11.395861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.395962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.395974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:24.744 
[2024-11-15 11:12:11.395985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:24.744 [2024-11-15 11:12:11.395995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.396017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.396028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:24.744 [2024-11-15 11:12:11.396039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:24.744 [2024-11-15 11:12:11.396049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.396082] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:24.744 [2024-11-15 11:12:11.396097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.396107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:24.744 [2024-11-15 11:12:11.396118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:24.744 [2024-11-15 11:12:11.396128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.396181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:24.744 [2024-11-15 11:12:11.396192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:24.744 [2024-11-15 11:12:11.396203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:24.744 [2024-11-15 11:12:11.396213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:24.744 [2024-11-15 11:12:11.397263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1511.880 ms, result 0 00:28:24.744 [2024-11-15 11:12:11.409636] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.744 [2024-11-15 11:12:11.425613] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:24.744 [2024-11-15 11:12:11.435153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:24.744 Validate MD5 checksum, iteration 1 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:24.744 11:12:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:24.744 11:12:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:24.744 [2024-11-15 11:12:11.569466] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 00:28:24.744 [2024-11-15 11:12:11.569873] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81593 ] 00:28:25.003 [2024-11-15 11:12:11.766981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.261 [2024-11-15 11:12:11.881616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.164  [2024-11-15T11:12:14.283Z] Copying: 675/1024 [MB] (675 MBps) [2024-11-15T11:12:16.818Z] Copying: 1024/1024 [MB] (average 672 MBps) 00:28:29.957 00:28:30.216 11:12:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:30.216 11:12:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:32.131 Validate MD5 checksum, iteration 2 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=9d2a935a42812b0ee9a63c5a30a18a44 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 9d2a935a42812b0ee9a63c5a30a18a44 != \9\d\2\a\9\3\5\a\4\2\8\1\2\b\0\e\e\9\a\6\3\c\5\a\3\0\a\1\8\a\4\4 ]] 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:32.131 11:12:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:32.131 [2024-11-15 11:12:18.737972] Starting SPDK v25.01-pre git sha1 
f1a181ac3 / DPDK 24.03.0 initialization... 00:28:32.131 [2024-11-15 11:12:18.738326] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81667 ] 00:28:32.131 [2024-11-15 11:12:18.922932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.388 [2024-11-15 11:12:19.048972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.291  [2024-11-15T11:12:21.719Z] Copying: 590/1024 [MB] (590 MBps) [2024-11-15T11:12:23.098Z] Copying: 1024/1024 [MB] (average 605 MBps) 00:28:36.237 00:28:36.237 11:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:36.237 11:12:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=7100ae26102d69dbf130a7dd090a6904 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 7100ae26102d69dbf130a7dd090a6904 != \7\1\0\0\a\e\2\6\1\0\2\d\6\9\d\b\f\1\3\0\a\7\d\d\0\9\0\a\6\9\0\4 ]] 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81553 ]] 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81553 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81553 ']' 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81553 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81553 00:28:38.132 killing process with pid 81553 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81553' 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 81553 00:28:38.132 11:12:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81553 00:28:39.509 [2024-11-15 11:12:26.082764] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:39.509 [2024-11-15 11:12:26.103026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.103094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:39.509 [2024-11-15 11:12:26.103111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:39.509 [2024-11-15 11:12:26.103123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.103148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:39.509 [2024-11-15 11:12:26.107424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.107468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:39.509 [2024-11-15 11:12:26.107483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.264 ms 00:28:39.509 [2024-11-15 11:12:26.107501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.107742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.107759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:39.509 [2024-11-15 11:12:26.107771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.199 ms 00:28:39.509 [2024-11-15 11:12:26.107782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.108910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.108948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:39.509 [2024-11-15 11:12:26.108961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.111 ms 00:28:39.509 [2024-11-15 11:12:26.108971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.109963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.110182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:39.509 [2024-11-15 11:12:26.110204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.948 ms 00:28:39.509 [2024-11-15 11:12:26.110216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.125859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.125937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:39.509 [2024-11-15 11:12:26.125953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.567 ms 00:28:39.509 [2024-11-15 11:12:26.125980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.134172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.134238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:39.509 [2024-11-15 11:12:26.134254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.150 ms 00:28:39.509 [2024-11-15 11:12:26.134266] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:39.509 [2024-11-15 11:12:26.134390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.509 [2024-11-15 11:12:26.134405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:39.509 [2024-11-15 11:12:26.134417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.078 ms 00:28:39.509 [2024-11-15 11:12:26.134428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.149808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.149873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:39.510 [2024-11-15 11:12:26.149889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.373 ms 00:28:39.510 [2024-11-15 11:12:26.149899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.165257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.165503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:39.510 [2024-11-15 11:12:26.165542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.333 ms 00:28:39.510 [2024-11-15 11:12:26.165555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.181370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.181432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:39.510 [2024-11-15 11:12:26.181450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.775 ms 00:28:39.510 [2024-11-15 11:12:26.181460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.196894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.196967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:39.510 [2024-11-15 11:12:26.196984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.311 ms 00:28:39.510 [2024-11-15 11:12:26.196994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.197045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:39.510 [2024-11-15 11:12:26.197066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:39.510 [2024-11-15 11:12:26.197079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:39.510 [2024-11-15 11:12:26.197091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:39.510 [2024-11-15 11:12:26.197103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 
[2024-11-15 11:12:26.197159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:39.510 [2024-11-15 11:12:26.197270] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:39.510 [2024-11-15 11:12:26.197280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 076dd754-373f-4c28-ad91-8848dad9fbe6 00:28:39.510 [2024-11-15 11:12:26.197291] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:39.510 [2024-11-15 11:12:26.197301] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:39.510 [2024-11-15 11:12:26.197311] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:39.510 [2024-11-15 11:12:26.197321] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:39.510 [2024-11-15 11:12:26.197331] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:39.510 [2024-11-15 11:12:26.197342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:39.510 [2024-11-15 11:12:26.197352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:39.510 [2024-11-15 11:12:26.197362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:39.510 [2024-11-15 11:12:26.197372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:39.510 [2024-11-15 11:12:26.197387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.197409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:39.510 [2024-11-15 11:12:26.197421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:28:39.510 [2024-11-15 11:12:26.197432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.217727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.217793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:39.510 [2024-11-15 11:12:26.217811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.285 ms 00:28:39.510 [2024-11-15 11:12:26.217821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
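The Action/name/duration/status quadruplets and the per-band dump above come straight from mngt/ftl_mngt.c and ftl_debug.c. When triaging a run like this one, the band dump is easier to read once summarized; a minimal sketch, assuming one log entry per line in a saved copy of this console output (build.log is a hypothetical filename, not something the test produces):

  awk '/ftl_dev_dump_bands/ && /Band [0-9]+:/ {
         valid += $(NF-6)   # left-hand side of "261120 / 261120" = valid blocks
         state[$NF]++       # trailing "state: closed" / "state: free"
       }
       END {
         for (s in state) printf "%-8s %d bands\n", s, state[s]
         printf "valid blocks total: %d\n", valid
       }' build.log

For the dump above this would report 3 closed and 15 free bands with 524288 valid blocks, consistent with the "total valid LBAs: 524288" line printed by ftl_debug.c.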
00:28:39.510 [2024-11-15 11:12:26.218388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:39.510 [2024-11-15 11:12:26.218402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:39.510 [2024-11-15 11:12:26.218413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms 00:28:39.510 [2024-11-15 11:12:26.218423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.284245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.510 [2024-11-15 11:12:26.284319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:39.510 [2024-11-15 11:12:26.284335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.510 [2024-11-15 11:12:26.284346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.284414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.510 [2024-11-15 11:12:26.284425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:39.510 [2024-11-15 11:12:26.284436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.510 [2024-11-15 11:12:26.284446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.284621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.510 [2024-11-15 11:12:26.284637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:39.510 [2024-11-15 11:12:26.284648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.510 [2024-11-15 11:12:26.284658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.510 [2024-11-15 11:12:26.284679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.510 [2024-11-15 11:12:26.284694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:39.510 [2024-11-15 11:12:26.284705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.510 [2024-11-15 11:12:26.284715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.410659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.410750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:39.770 [2024-11-15 11:12:26.410768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.410779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:39.770 [2024-11-15 11:12:26.515289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:39.770 [2024-11-15 11:12:26.515445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515455] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:39.770 [2024-11-15 11:12:26.515558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:39.770 [2024-11-15 11:12:26.515724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:39.770 [2024-11-15 11:12:26.515805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:39.770 [2024-11-15 11:12:26.515880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.515933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:39.770 [2024-11-15 11:12:26.515945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:39.770 [2024-11-15 11:12:26.515959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:39.770 [2024-11-15 11:12:26.515968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:39.770 [2024-11-15 11:12:26.516101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 413.702 ms, result 0 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:41.173 Remove shared memory files 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:41.173 11:12:27 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81338 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:41.173 ************************************ 00:28:41.173 END TEST ftl_upgrade_shutdown 00:28:41.173 ************************************ 00:28:41.173 00:28:41.173 real 1m28.112s 00:28:41.173 user 2m1.919s 00:28:41.173 sys 0m22.411s 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:41.173 11:12:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@14 -- # killprocess 73924 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@954 -- # '[' -z 73924 ']' 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@958 -- # kill -0 73924 00:28:41.173 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73924) - No such process 00:28:41.173 Process with pid 73924 is not found 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 73924 is not found' 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81798 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:41.173 11:12:27 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81798 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@835 -- # '[' -z 81798 ']' 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.173 11:12:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:41.173 [2024-11-15 11:12:27.991340] Starting SPDK v25.01-pre git sha1 f1a181ac3 / DPDK 24.03.0 initialization... 
00:28:41.173 [2024-11-15 11:12:27.991472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81798 ] 00:28:41.432 [2024-11-15 11:12:28.173465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.691 [2024-11-15 11:12:28.302151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.628 11:12:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.628 11:12:29 ftl -- common/autotest_common.sh@868 -- # return 0 00:28:42.628 11:12:29 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:42.628 nvme0n1 00:28:42.628 11:12:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:42.887 11:12:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:42.887 11:12:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:42.888 11:12:29 ftl -- ftl/common.sh@28 -- # stores=7a19f9d8-46c3-4e41-8618-0b60ecf63afb 00:28:42.888 11:12:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:42.888 11:12:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7a19f9d8-46c3-4e41-8618-0b60ecf63afb 00:28:43.197 11:12:29 ftl -- ftl/ftl.sh@23 -- # killprocess 81798 00:28:43.197 11:12:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 81798 ']' 00:28:43.197 11:12:29 ftl -- common/autotest_common.sh@958 -- # kill -0 81798 00:28:43.197 11:12:29 ftl -- common/autotest_common.sh@959 -- # uname 00:28:43.197 11:12:29 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:43.197 11:12:29 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81798 00:28:43.197 killing process with pid 81798 00:28:43.197 11:12:30 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:43.197 11:12:30 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:43.197 11:12:30 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81798' 00:28:43.197 11:12:30 ftl -- common/autotest_common.sh@973 -- # kill 81798 00:28:43.197 11:12:30 ftl -- common/autotest_common.sh@978 -- # wait 81798 00:28:45.746 11:12:32 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:46.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:46.005 Waiting for block devices as requested 00:28:46.264 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:46.264 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:46.264 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:46.523 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:51.808 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:51.808 11:12:38 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:51.808 Remove shared memory files 00:28:51.809 11:12:38 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:51.809 11:12:38 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:51.809 11:12:38 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:51.809 11:12:38 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:51.809 11:12:38 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:51.809 11:12:38 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:51.809 
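The killprocess sequence traced above (ftl/ftl.sh@23 via autotest_common.sh) reduces to a pid guard, a liveness probe with kill -0, a name check so a sudo wrapper is never signalled, then kill and wait. A simplified sketch of the pattern the xtrace shows, not a verbatim copy of autotest_common.sh:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # '[' -z 81798 ']' guard
      if ! kill -0 "$pid" 2>/dev/null; then     # probe only, no signal delivered
          echo "Process with pid $pid is not found"
          return 0
      fi
      local name
      name=$(ps --no-headers -o comm= "$pid")   # resolve the command name
      [ "$name" = sudo ] && return 1            # refuse to signal sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                   # reap it (works when pid is a child)
  }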
************************************ 00:28:51.809 END TEST ftl 00:28:51.809 ************************************ 00:28:51.809 00:28:51.809 real 11m39.201s 00:28:51.809 user 14m25.418s 00:28:51.809 sys 1m32.771s 00:28:51.809 11:12:38 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.809 11:12:38 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:51.809 11:12:38 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:51.809 11:12:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:51.809 11:12:38 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:51.809 11:12:38 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:28:51.809 11:12:38 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:51.809 11:12:38 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:51.809 11:12:38 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:28:51.809 11:12:38 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:28:51.809 11:12:38 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:28:51.809 11:12:38 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:28:51.809 11:12:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.809 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:28:51.809 11:12:38 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:28:51.809 11:12:38 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:28:51.809 11:12:38 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:28:51.809 11:12:38 -- common/autotest_common.sh@10 -- # set +x 00:28:54.344 INFO: APP EXITING 00:28:54.344 INFO: killing all VMs 00:28:54.344 INFO: killing vhost app 00:28:54.344 INFO: EXIT DONE 00:28:54.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:54.912 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:54.912 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:54.912 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:54.912 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:55.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:56.046 Cleaning 00:28:56.046 Removing: /var/run/dpdk/spdk0/config 00:28:56.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:56.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:56.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:56.046 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:56.046 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:56.046 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:56.046 Removing: /var/run/dpdk/spdk0 00:28:56.046 Removing: /var/run/dpdk/spdk_pid57595 00:28:56.046 Removing: /var/run/dpdk/spdk_pid57836 00:28:56.046 Removing: /var/run/dpdk/spdk_pid58065 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58169 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58225 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58359 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58382 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58592 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58704 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58811 00:28:56.047 Removing: /var/run/dpdk/spdk_pid58933 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59041 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59080 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59117 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59193 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59299 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59748 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59823 
00:28:56.047 Removing: /var/run/dpdk/spdk_pid59897 00:28:56.047 Removing: /var/run/dpdk/spdk_pid59913 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60070 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60086 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60245 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60263 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60333 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60355 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60420 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60438 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60640 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60671 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60760 00:28:56.047 Removing: /var/run/dpdk/spdk_pid60954 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61049 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61091 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61545 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61650 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61759 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61814 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61838 00:28:56.047 Removing: /var/run/dpdk/spdk_pid61922 00:28:56.047 Removing: /var/run/dpdk/spdk_pid62570 00:28:56.047 Removing: /var/run/dpdk/spdk_pid62612 00:28:56.047 Removing: /var/run/dpdk/spdk_pid63099 00:28:56.047 Removing: /var/run/dpdk/spdk_pid63203 00:28:56.047 Removing: /var/run/dpdk/spdk_pid63323 00:28:56.047 Removing: /var/run/dpdk/spdk_pid63376 00:28:56.047 Removing: /var/run/dpdk/spdk_pid63402 00:28:56.047 Removing: /var/run/dpdk/spdk_pid63427 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65321 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65465 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65473 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65485 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65531 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65535 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65547 00:28:56.047 Removing: /var/run/dpdk/spdk_pid65597 00:28:56.306 Removing: /var/run/dpdk/spdk_pid65601 00:28:56.306 Removing: /var/run/dpdk/spdk_pid65613 00:28:56.306 Removing: /var/run/dpdk/spdk_pid65658 00:28:56.306 Removing: /var/run/dpdk/spdk_pid65662 00:28:56.306 Removing: /var/run/dpdk/spdk_pid65674 00:28:56.306 Removing: /var/run/dpdk/spdk_pid67079 00:28:56.306 Removing: /var/run/dpdk/spdk_pid67188 00:28:56.306 Removing: /var/run/dpdk/spdk_pid68618 00:28:56.306 Removing: /var/run/dpdk/spdk_pid69991 00:28:56.306 Removing: /var/run/dpdk/spdk_pid70107 00:28:56.306 Removing: /var/run/dpdk/spdk_pid70224 00:28:56.306 Removing: /var/run/dpdk/spdk_pid70331 00:28:56.306 Removing: /var/run/dpdk/spdk_pid70457 00:28:56.306 Removing: /var/run/dpdk/spdk_pid70537 00:28:56.306 Removing: /var/run/dpdk/spdk_pid70690 00:28:56.306 Removing: /var/run/dpdk/spdk_pid71066 00:28:56.306 Removing: /var/run/dpdk/spdk_pid71108 00:28:56.306 Removing: /var/run/dpdk/spdk_pid71563 00:28:56.306 Removing: /var/run/dpdk/spdk_pid71757 00:28:56.306 Removing: /var/run/dpdk/spdk_pid71859 00:28:56.306 Removing: /var/run/dpdk/spdk_pid71964 00:28:56.306 Removing: /var/run/dpdk/spdk_pid72025 00:28:56.306 Removing: /var/run/dpdk/spdk_pid72055 00:28:56.306 Removing: /var/run/dpdk/spdk_pid72367 00:28:56.306 Removing: /var/run/dpdk/spdk_pid72433 00:28:56.306 Removing: /var/run/dpdk/spdk_pid72524 00:28:56.306 Removing: /var/run/dpdk/spdk_pid72958 00:28:56.306 Removing: /var/run/dpdk/spdk_pid73113 00:28:56.306 Removing: /var/run/dpdk/spdk_pid73924 00:28:56.306 Removing: /var/run/dpdk/spdk_pid74068 00:28:56.306 Removing: /var/run/dpdk/spdk_pid74304 00:28:56.306 Removing: 
00:28:56.306 Removing: /var/run/dpdk/spdk_pid74786
00:28:56.306 Removing: /var/run/dpdk/spdk_pid75046
00:28:56.306 Removing: /var/run/dpdk/spdk_pid75409
00:28:56.306 Removing: /var/run/dpdk/spdk_pid75618
00:28:56.306 Removing: /var/run/dpdk/spdk_pid75766
00:28:56.306 Removing: /var/run/dpdk/spdk_pid75835
00:28:56.306 Removing: /var/run/dpdk/spdk_pid75978
00:28:56.306 Removing: /var/run/dpdk/spdk_pid76019
00:28:56.306 Removing: /var/run/dpdk/spdk_pid76085
00:28:56.306 Removing: /var/run/dpdk/spdk_pid76291
00:28:56.306 Removing: /var/run/dpdk/spdk_pid76545
00:28:56.306 Removing: /var/run/dpdk/spdk_pid77022
00:28:56.306 Removing: /var/run/dpdk/spdk_pid77477
00:28:56.306 Removing: /var/run/dpdk/spdk_pid77967
00:28:56.306 Removing: /var/run/dpdk/spdk_pid78504
00:28:56.306 Removing: /var/run/dpdk/spdk_pid78647
00:28:56.306 Removing: /var/run/dpdk/spdk_pid78741
00:28:56.306 Removing: /var/run/dpdk/spdk_pid79372
00:28:56.306 Removing: /var/run/dpdk/spdk_pid79447
00:28:56.306 Removing: /var/run/dpdk/spdk_pid79916
00:28:56.306 Removing: /var/run/dpdk/spdk_pid80295
00:28:56.306 Removing: /var/run/dpdk/spdk_pid80774
00:28:56.306 Removing: /var/run/dpdk/spdk_pid80902
00:28:56.306 Removing: /var/run/dpdk/spdk_pid80955
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81019
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81069
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81143
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81338
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81409
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81475
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81553
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81593
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81667
00:28:56.565 Removing: /var/run/dpdk/spdk_pid81798
00:28:56.565 Clean
00:28:56.565 11:12:43 -- common/autotest_common.sh@1453 -- # return 0
00:28:56.565 11:12:43 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:28:56.565 11:12:43 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:56.565 11:12:43 -- common/autotest_common.sh@10 -- # set +x
00:28:56.565 11:12:43 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:28:56.565 11:12:43 -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:56.565 11:12:43 -- common/autotest_common.sh@10 -- # set +x
00:28:56.565 11:12:43 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:56.565 11:12:43 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:56.565 11:12:43 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:56.565 11:12:43 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:28:56.565 11:12:43 -- spdk/autotest.sh@398 -- # hostname
00:28:56.565 11:12:43 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:56.824 geninfo: WARNING: invalid characters removed from testname!
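The lcov call above is the capture half of autotest's two-pass coverage model: a baseline tracefile (cov_base.info) is captured before the tests run, the post-test counters are captured here as cov_test.info (tagged with the hostname of the test image), and the two are merged in the next step. A condensed sketch of that pattern, with the long --rc option block elided and $SPDK_DIR / $OUT standing in for the absolute paths shown in the log:

  # Capture post-test counters from the build tree, tagged with the image name.
  lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
  # Fold the pre-test baseline and the post-test capture into one tracefile.
  lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"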
00:29:23.363 11:13:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:26.649 11:13:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:28.571 11:13:15 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:31.106 11:13:17 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:33.008 11:13:19 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:35.540 11:13:22 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:38.071 11:13:24 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:29:38.071 11:13:24 -- spdk/autorun.sh@1 -- $ timing_finish
00:29:38.071 11:13:24 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:29:38.071 11:13:24 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:38.071 11:13:24 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:29:38.071 11:13:24 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:38.071 + [[ -n 5242 ]]
00:29:38.071 + sudo kill 5242
00:29:38.079 [Pipeline] }
00:29:38.089 [Pipeline] // timeout
00:29:38.094 [Pipeline] }
00:29:38.104 [Pipeline] // stage
00:29:38.108 [Pipeline] }
00:29:38.118 [Pipeline] // catchError
00:29:38.126 [Pipeline] stage
00:29:38.127 [Pipeline] { (Stop VM)
00:29:38.137 [Pipeline] sh
00:29:38.414 + vagrant halt
00:29:41.702 ==> default: Halting domain...
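Before the VM is halted above, autotest.sh@400-407 prune the merged tracefile by rewriting cov_total.info in place, dropping DPDK sources, system headers under /usr, and SPDK example/tool code from the final report. The same exclusion chain written as a loop; the loop form is an assumption for readability (the log runs each lcov -r as a separate command, and the '/usr/*' pass additionally sets --ignore-errors unused,unused):

  total="$OUT/cov_total.info"   # $OUT as in the sketch above
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$total" "$pat" -o "$total"   # -r/--remove drops files matching $pat
  done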
00:29:48.293 [Pipeline] sh
00:29:48.581 + vagrant destroy -f
00:29:51.861 ==> default: Removing domain...
00:29:52.453 [Pipeline] sh
00:29:52.742 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:29:52.750 [Pipeline] }
00:29:52.765 [Pipeline] // stage
00:29:52.770 [Pipeline] }
00:29:52.780 [Pipeline] // dir
00:29:52.784 [Pipeline] }
00:29:52.797 [Pipeline] // wrap
00:29:52.802 [Pipeline] }
00:29:52.812 [Pipeline] // catchError
00:29:52.820 [Pipeline] stage
00:29:52.821 [Pipeline] { (Epilogue)
00:29:52.833 [Pipeline] sh
00:29:53.111 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:59.736 [Pipeline] catchError
00:29:59.738 [Pipeline] {
00:29:59.751 [Pipeline] sh
00:30:00.031 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:00.292 Artifacts sizes are good
00:30:00.301 [Pipeline] }
00:30:00.316 [Pipeline] // catchError
00:30:00.328 [Pipeline] archiveArtifacts
00:30:00.336 Archiving artifacts
00:30:00.476 [Pipeline] cleanWs
00:30:00.492 [WS-CLEANUP] Deleting project workspace...
00:30:00.492 [WS-CLEANUP] Deferred wipeout is used...
00:30:00.505 [WS-CLEANUP] done
00:30:00.507 [Pipeline] }
00:30:00.523 [Pipeline] // stage
00:30:00.529 [Pipeline] }
00:30:00.543 [Pipeline] // node
00:30:00.549 [Pipeline] End of Pipeline
00:30:00.594 Finished: SUCCESS
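The epilogue above compresses the test output, verifies its size with check_artifacts_size.sh ("Artifacts sizes are good"), archives it, and wipes the workspace. The size-check script itself is not shown in this log; a hypothetical sketch of such a guard, where the 1024 MB threshold and the du-based measurement are assumptions rather than the script's actual logic:

  # Hypothetical artifact-size guard: fail the stage if the archived
  # output directory exceeds an assumed limit.
  limit_mb=1024
  size_mb=$(du -sm output | cut -f1)
  if [ "$size_mb" -gt "$limit_mb" ]; then
      echo "Artifacts too large: ${size_mb}MB > ${limit_mb}MB" >&2
      exit 1
  fi
  echo "Artifacts sizes are good"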